The Solution of Linear Complementarity Problems on an Array Processor.
1981-01-01
WISCONSIN-MADISON MATHEMATICS RESEARCH CENTER. THE SOLUTION OF LINEAR COMPLEMENTARITY PROBLEMS ON AN ARRAY PROCESSOR. C. W. Cryer, P. M. Flanders, D. J. Hunt, S. F. Reddaway, and J. Stansbury. Technical Summary Report #2170, January 1981. ABSTRACT: The Distributed Array Processor (DAP) manufactured by International Computers Limited is an array of 1-bit 200-nanosecond processors. The Pilot DAP on which the present work was done is a 32...
On Solving Linear Complementarity Problems as Linear Programs.
1976-03-01
...the linear program: minimize r^T v subject to q + Yv > 0, Xv > 0, for any positive vector r in R^n. Letting x = Xv, we see that x is... [Table residue omitted. Table 4 inputs: n = 7, q^T = (..., -1, -4, 6, -5, 3, -2); starting iterate (x, y) = 0; original...]
Solving of variational inequalities by reducing to the linear complementarity problem
NASA Astrophysics Data System (ADS)
Gabidullina, Z. R.
2016-11-01
We study the variational inequalities closely connected with the linear separation problem of the convex polyhedrain the Euclidean space. For solving of these inequalities, we apply the reduction to the linear complementarity problem. Such reduction allows one to solve the variational inequalities with the help of the Matlab software package.
1980-10-01
THE SOLUTION OF LINEAR COMPLEMENTARITY PROBLEMS ARISING FROM FREE BOUNDARY PROBLEMS. Achi Brandt (1) and Colin W. Cryer (2). 1.1 INTRODUCTION... University of Wisconsin-Madison, Madison, WI 53706. (1) Sponsored by the United States Army under Contract No. DAAG29-80-C-0041. (2) Sponsored by... multiplying (1.3a) by -1. For example, if t is the Laplace operator in R^2, then a possible choice for L would be the classical five-point difference
A Self-Adaptive Projection and Contraction Method for Linear Complementarity Problems
Liao, Lizhi; Wang, Shengli
2003-10-15
In this paper we develop a self-adaptive projection and contraction method for the linear complementarity problem (LCP). This method improves the practical performance of the modified projection and contraction method by adopting a self-adaptive technique. The global convergence of our new method is proved under mild assumptions. Our numerical tests clearly demonstrate the necessity and effectiveness of our proposed method.
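The core projection step behind methods of this kind can be sketched for the LCP (find z >= 0 with w = Mz + q >= 0 and z^T w = 0). The sketch below uses a fixed step size rather than the paper's self-adaptive rule, and the test matrix and vector are made up for illustration:

```python
import numpy as np

def lcp_projection(M, q, alpha=0.1, tol=1e-10, max_iter=10000):
    """Projected fixed-point iteration for LCP(q, M):
    find z >= 0 with w = M z + q >= 0 and z.w = 0.
    Converges for suitable alpha when M is positive definite;
    the self-adaptive variant would instead tune alpha per step."""
    z = np.zeros_like(q, dtype=float)
    for _ in range(max_iter):
        z_new = np.maximum(0.0, z - alpha * (M @ z + q))  # project onto z >= 0
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# illustrative positive definite instance (not from the paper)
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
z = lcp_projection(M, q)
w = M @ z + q   # complementarity: z >= 0, w >= 0, z.w ~ 0
```

Here the solution is interior (w = 0 at the fixed point), so the iteration reduces to a contraction mapping with factor max|1 - alpha*lambda_i(M)|.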
1983-08-01
earlier paper, our present analysis applies only to the symmetric linear complementarity problem . Various applications to a strictly convex quadratic...characterization requires no such constraint qualification as (F). There is yet another approach to apply an iterative method for solving the strictly convex ...described a Lagrangian relaxation algorithm for a constrained matrix problem which is formulated as a strictly convex separable quadratic program. They
Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.
2001-07-01
This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper.
1987-05-01
Laboratory. The Equivalence of Dantzig's Self-Dual Parametric Algorithm for Linear Programs to Lemke's Algorithm for Linear Complementarity Problems Applied to Linear... Management Science 11, pp. 681-689. Lemke, C.E. (1970). "Recent results on complementarity problems," in Nonlinear programming (J.B. Rosen, O.L...
New Existence Conditions for Order Complementarity Problems
NASA Astrophysics Data System (ADS)
Németh, S. Z.
2009-09-01
Complementarity problems are mathematical models of problems in economics, engineering and physics. A special class of complementarity problems are the order complementarity problems [2]. Order complementarity problems can be applied in lubrication theory [6] and economics [1]. The notion of exceptional family of elements for general order complementarity problems in Banach spaces will be introduced. It will be shown that for general order complementarity problems defined by completely continuous fields the problem has either a solution or an exceptional family of elements (for other notions of exceptional family of elements see [1, 2, 3, 4] and the related references therein). This solves a conjecture of [2] about the existence of exceptional family of elements for order complementarity problems. The proof can be done by using the Leray-Schauder alternative [5]. An application to integral operators will be given.
Generalized quasi-variational inequality and implicit complementarity problems
Yao, Jen-Chih.
1989-10-01
A new problem called the generalized quasi-variational inequality problem is introduced. This new formulation extends all kinds of variational inequality problem formulations that have been introduced and enlarges the class of problems that can be approached by the variational inequality problem formulation. Existence results without convexity assumptions are established and topological properties of the solution set are investigated. A new problem called the generalized implicit complementarity problem is also introduced which generalizes all the complementarity problem formulations that have been introduced. Applications of generalized quasi-variational inequality and implicit complementarity problems are given. 43 refs.
A basic theorem of complementarity for the generalized variational-like inequality problem
Yao, Jen-Chih.
1989-11-01
In this report, a basic theorem of complementarity is established for the generalized variational-like inequality problem introduced by Parida and Sen. Some existence results for both generalized variational inequality and complementarity problems are established by employing this basic theorem of complementarity. In particular, some sets of conditions that are normally satisfied by a nonsolvable generalized complementarity problem are investigated. 16 refs.
Levenberg-Marquardt method for the eigenvalue complementarity problem.
Chen, Yuan-yuan; Gao, Yan
2014-01-01
The eigenvalue complementarity problem (EiCP) is a very useful model that is widely used in the study of many problems in mechanics, engineering, and economics. The EiCP was shown to be equivalent to a special nonlinear complementarity problem or a mathematical programming problem with complementarity constraints. The existing methods for solving the EiCP are all nonsmooth methods, including nonsmooth or semismooth Newton-type methods. In this paper, we reformulate the EiCP as a system of continuously differentiable equations and apply the Levenberg-Marquardt method to solve it. Under mild assumptions, the method is proved to be globally convergent. Finally, some numerical results and extensions of the method are given. The numerical experiments highlight the efficiency of the method.
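The Levenberg-Marquardt iteration for a smooth system F(x) = 0 is the generic building block the abstract refers to. A minimal sketch follows; the toy system, fixed damping parameter, and forward-difference Jacobian are all illustrative choices, not taken from the paper:

```python
import numpy as np

def levenberg_marquardt(F, x0, mu=1e-3, tol=1e-10, max_iter=100):
    """Basic Levenberg-Marquardt iteration for F(x) = 0 with a
    forward-difference Jacobian and a fixed damping parameter mu
    (real implementations adapt mu per step)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        n = x.size
        J = np.empty((f.size, n))
        h = 1e-7
        for j in range(n):              # forward-difference Jacobian
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (F(x + e) - f) / h
        # damped Gauss-Newton step: (J^T J + mu I) s = -J^T f
        s = np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ f)
        x = x + s
    return x

# toy smooth system: x0^2 + x1^2 = 1, x0 = x1  (root at 1/sqrt(2), 1/sqrt(2))
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
x = levenberg_marquardt(F, [1.0, 0.5])
```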
A second order cone complementarity approach for the numerical solution of elastoplasticity problems
NASA Astrophysics Data System (ADS)
Zhang, L. L.; Li, J. Y.; Zhang, H. W.; Pan, S. H.
2013-01-01
In this paper we present a new approach for solving elastoplastic problems as second order cone complementarity problems (SOCCPs). Specifically, two classes of elastoplastic problems, i.e. the J2 plasticity problems with combined linear kinematic and isotropic hardening laws and the Drucker-Prager plasticity problems with associative or non-associative flow rules, are taken as examples to illustrate the main idea of our new approach. In the new approach, firstly, the classical elastoplastic constitutive equations are equivalently reformulated as second order cone complementarity conditions. Secondly, by employing the finite element method and treating the nodal displacements and the plasticity multiplier vectors of the Gaussian integration points as the unknown variables, we obtain a standard SOCCP formulation for the elastoplasticity analysis, which makes general SOCCP solvers developed in the field of mathematical programming directly available in the field of computational plasticity. Finally, a semi-smooth Newton algorithm is suggested to solve the obtained SOCCPs. Numerical results for several classical plasticity benchmark problems confirm the effectiveness and robustness of the SOCCP approach.
Pseudo-Monotone Complementarity Problems in Hilbert Space
1990-07-01
have the same solution set. Therefore, one approach to studying NCP is by studying VIP over closed convex cones. The purpose of this paper is to use... interior and relative boundary of B in K, respectively. The set K\B denotes the complement of B in K. A subset of a Hilbert space is said to be... conditions for the existence of solutions to the variational inequality problem for unbounded sets. Theorem 2.2. Let K be a closed convex subset of the
NASA Astrophysics Data System (ADS)
Júdice, Joaquim; Raydan, Marcos; Rosa, Silvério; Santos, Sandra
2008-04-01
This paper is devoted to the eigenvalue complementarity problem (EiCP) with symmetric real matrices. This problem is equivalent to finding a stationary point of a differentiable optimization program involving the Rayleigh quotient on a simplex (Queiroz et al., Math. Comput. 73, 1849-1863, 2004). We discuss a logarithmic function and a quadratic programming formulation to find a complementarity eigenvalue by computing a stationary point of an appropriate merit function on a special convex set. A variant of the spectral projected gradient algorithm with a specially designed line search is introduced to solve the EiCP. Computational experience shows that the application of this algorithm to the logarithmic function formulation is a quite efficient way to find a solution to the symmetric EiCP.
An NE/SQP method for the bounded nonlinear complementarity problem
Gabriel, S.A.
1995-05-30
NE/SQP is a recent algorithm that has proven quite effective for solving the pure and mixed forms of the nonlinear complementarity problem (NCP). NE/SQP is robust in the sense that its direction-finding subproblems are always solvable; in addition, the convergence rate of this method is Q-quadratic. In this paper the author considers a generalized version of NE/SQP proposed by Pang and Qi, that is suitable for the bounded NCP. The author extends their work by demonstrating a stronger convergence result and then tests a proposed method on several numerical problems.
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
Huang, Kuo-Ling; Mehrotra, Sanjay
2016-11-08
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
NASA Astrophysics Data System (ADS)
Kastner, R. E.
It is argued that Niels Bohr ultimately arrived at positivistic and antirealist-flavored statements because of weaknesses in his initial objective of accounting for measurement in physical terms. Bohr's investigative approach faced a dilemma, the choices being (i) conceptual inconsistency or (ii) taking the classical realm as primitive. In either case, Bohr's "Complementarity" does not adequately explain or account for the emergence of a macroscopic, classical domain from a microscopic domain described by quantum mechanics. A diagnosis of the basic problem is offered, and an alternative way forward is indicated.
Linearization problem in pseudolite surveys
NASA Astrophysics Data System (ADS)
Cellmer, Slawomir; Rapinski, Jacek
2010-06-01
GPS augmented with pseudolites (PL) can be used in various engineering surveys. A pseudolite-only navigation system can also be designed and used in any place, even if the GPS signal is not available (Kee et al. Development of indoor navigation system using asynchronous pseudolites, 1038-1045, 2000). Pseudolites have many applications, especially in engineering surveys, where a harsh survey environment is common: they may be used on construction sites, in open pit mines, and in city canyons. GPS and PL baseline processing is similar, although there are a few differences that must be taken into account. One of the major issues is the linearization problem. The source of the problem is the neglect of the second-order terms of the Taylor series expansion in GPS baseline processing software. This problem occurs when the pseudolite is relatively close to the receiver, which is the case in PL surveys. In this paper the authors present an algorithm for GPS + PL data processing that includes the second-order terms of the Taylor series expansion, which are neglected in the classical GPS-only approach. The mathematical model of the adjustment problem, a detailed proposal for application in baseline processing algorithms, and numerical tests are presented.
Can linear superiorization be useful for linear optimization problems?
NASA Astrophysics Data System (ADS)
Censor, Yair
2017-04-01
Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
Stochastic Linear Quadratic Optimal Control Problems
Chen, S.; Yong, J.
2001-07-01
This paper is concerned with the stochastic linear quadratic optimal control problem (LQ problem, for short) for which the coefficients are allowed to be random and the cost functional is allowed to have a negative weight on the square of the control variable. Some intrinsic relations among the LQ problem, the stochastic maximum principle, and the (linear) forward-backward stochastic differential equations are established. Some results involving Riccati equation are discussed as well.
Higher order sensitivity of solutions to convex programming problems without strict complementarity
NASA Technical Reports Server (NTRS)
Malanowski, Kazimierz
1988-01-01
Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.
Quantum Algorithm for Linear Programming Problems
NASA Astrophysics Data System (ADS)
Joag, Pramod; Mehendale, Dhananjay
The quantum algorithm (PRL 103, 150502, 2009) solves a system of linear equations with exponential speedup over existing classical algorithms. We show that the above algorithm can be readily adopted in iterative algorithms for solving linear programming (LP) problems. The first iterative algorithm that we suggest for the LP problem follows from duality theory. It consists of finding a nonnegative solution of the equations for the duality condition, for the constraints imposed by the given primal problem, and for the constraints imposed by its corresponding dual problem. This problem is called the problem of nonnegative least squares, or simply the NNLS problem. We use a well-known method for solving the NNLS problem due to Lawson and Hanson. This algorithm essentially consists of solving a new system of linear equations in each iterative step. The other iterative algorithms that can be used are those based on interior point methods. The same technique can be adopted for solving network flow problems, as these problems can be readily formulated as LP problems. The suggested quantum algorithm can solve LP problems and network flow problems of very large size, involving millions of variables.
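The NNLS building block mentioned above is the problem min ||Mz - d|| subject to z >= 0; a zero residual certifies a nonnegative solution of Mz = d, which is how the stacked LP optimality system is solved. Lawson-Hanson is the classical active-set solver (available as scipy.optimize.nnls); as a minimal stand-in, the sketch below uses projected gradient descent on a made-up consistent system:

```python
import numpy as np

def nnls_pg(M, d, steps=20000):
    """Nonnegative least squares, min ||M z - d||_2 s.t. z >= 0,
    by projected gradient descent with step 1/L (L = Lipschitz
    constant of the gradient). A simple stand-in for the much
    faster Lawson-Hanson active-set algorithm."""
    L = np.linalg.norm(M, 2) ** 2          # largest eigenvalue of M^T M
    z = np.zeros(M.shape[1])
    for _ in range(steps):
        grad = M.T @ (M @ z - d)
        z = np.maximum(0.0, z - grad / L)  # gradient step, then project
    return z

# illustrative system, consistent with the nonnegative point z = (1, 2)
M = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [1.0, 1.0]])
d = np.array([5.0, 5.0, 3.0])
z = nnls_pg(M, d)   # residual ~ 0, so M z = d has a nonnegative solution
```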
The linear separability problem: some testing methods.
Elizondo, D
2006-03-01
The notion of linear separability is used widely in machine learning research. Learning algorithms that use this concept to learn include neural networks (single layer perceptron and recursive deterministic perceptron), and kernel machines (support vector machines). This paper presents an overview of several of the methods for testing linear separability between two classes. The methods are divided into four groups: Those based on linear programming, those based on computational geometry, one based on neural networks, and one based on quadratic programming. The Fisher linear discriminant method is also presented. A section on the quantification of the complexity of classification problems is included.
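The neural-network test surveyed above rests on the perceptron convergence theorem: a single-layer perceptron finds a separating hyperplane in finitely many updates exactly when the two classes are linearly separable. A minimal sketch (the epoch budget is a practical cutoff, so non-convergence only suggests, and does not prove, non-separability):

```python
import numpy as np

def perceptron_separable(X, y, max_epochs=1000):
    """Perceptron-based separability test for labels y in {-1, +1}.
    Returns (converged, weights); convergence within the budget
    yields a separating hyperplane w[:-1].x + w[-1] = 0."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # absorb bias term
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:              # misclassified point
                w += yi * xi                    # perceptron update
                updated = True
        if not updated:
            return True, w                      # no errors: separable
    return False, w

# AND-style labels are linearly separable; XOR labels are not
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([-1, -1, -1, 1])
y_xor = np.array([-1, 1, 1, -1])
sep_and, _ = perceptron_separable(X, y_and)
sep_xor, _ = perceptron_separable(X, y_xor)
```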
The Vertical Linear Fractional Initialization Problem
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
1999-01-01
This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.
The report discusses a two-person max-min problem in which the maximizing player moves first and the minimizing player has perfect information of the... The joint constraints as well as the objective function are assumed to be linear. For this problem it is shown that the familiar inequality min max >= max min is reversed due to the influence of the joint constraints. The problem is characterized as a nonconvex program and a method of
Drinkers and Bettors: Investigating the Complementarity of Alcohol Consumption and Problem Gambling
Maclean, Johanna Catherine; Ettner, Susan L.
2009-01-01
Regulated gambling is a multi-billion dollar industry in the United States with greater than 100 percent increases in revenue over the past decade. Along with this rise in gambling popularity and gaming options comes an increased risk of addiction and the associated social costs. This paper focuses on the effect of alcohol use on gambling-related problems. Variables correlated with both alcohol use and gambling may be difficult to observe, and the inability to include these items in empirical models may bias coefficient estimates. After addressing the endogeneity of alcohol use when appropriate, we find strong evidence that problematic gambling and alcohol consumption are complementary activities. PMID:18430523
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1990-01-01
This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.
Linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1980-01-01
Problem involves design of controls for linear time-invariant system disturbed by white noise. Solution is Kalman filter coupled through set of optimal regulator gains to produce desired control signal. Key to solution is solving matrix Riccati differential equation. LSOCE effectively solves problem for wide range of practical applications. Program is written in FORTRAN IV for batch execution and has been implemented on IBM 360.
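The matrix Riccati differential equation at the heart of such LQ designs can be integrated backward from its terminal condition. The sketch below does this with fixed-step RK4 on a 1x1 example; the matrices A, B, Q, R are made-up illustrative values and the code is not the LSOCE program:

```python
import numpy as np

# Backward integration of the Riccati differential equation
#   -dP/dt = A^T P + P A - P B R^{-1} B^T P + Q,   P(T) = 0.
# Substituting s = T - t turns this into a forward ODE in s.
A = np.array([[-1.0]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)

def riccati_rhs(P):
    return A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q

T, steps = 10.0, 10000
h = T / steps
P = np.zeros((1, 1))            # terminal condition P(T) = 0
for _ in range(steps):          # RK4, marching backward in real time
    k1 = riccati_rhs(P)
    k2 = riccati_rhs(P + 0.5 * h * k1)
    k3 = riccati_rhs(P + 0.5 * h * k2)
    k4 = riccati_rhs(P + h * k3)
    P = P + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

Over a long horizon P(0) approaches the algebraic Riccati solution of -2P - P^2 + 1 = 0, i.e. P = sqrt(2) - 1, and the optimal feedback gain is K = R^{-1} B^T P.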
The linear regulator problem for parabolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kunisch, K.
1983-01-01
An approximation framework is presented for computation (in finite-dimensional spaces) of Riccati operators that can be guaranteed to converge to the Riccati operator in feedback controls for abstract evolution systems in a Hilbert space. It is shown how these results may be used in the linear optimal regulator problem for a large class of parabolic systems.
Dynamics of Kepler problem with linear drag
NASA Astrophysics Data System (ADS)
Margheri, Alessandro; Ortega, Rafael; Rebelo, Carlota
2014-09-01
We study the dynamics of the Kepler problem with linear drag. We prove that motions with nonzero angular momentum have no collisions and travel from infinity to the singularity. In the process, the energy takes all real values and the angular velocity becomes unbounded. We also prove that there are two types of linear motions: capture-collision and ejection-collision. The behaviour of solutions at collisions is the same as in the conservative case. Proofs are obtained using the geometric theory of ordinary differential equations and two regularizations for the singularity of the Kepler problem equation. The first, already considered in Diacu (Celest Mech Dyn Astron 75:1-15, 1999), is mainly used for the study of the linear motions. The second, the well-known Levi-Civita transformation, allows us to complete the study of the asymptotic values of the energy and to prove the existence of collision solutions with arbitrary energy.
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1987-01-01
Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in case the cost integral ranges over a finite time interval as well as in the case it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.
NASA Astrophysics Data System (ADS)
Leonard, Aline; Beriaux, Emilie; Defourny, Pierre
2013-12-01
This paper presents results of LAI estimation from multi-polarimetric SAR data assessed for maize and winter wheat crops. Taking advantage of a large multi-year data set of RADARSAT-2 and ground observations collected in Belgium and in The Netherlands, this research aims at improving a method that exploits all linear polarizations to optimize the LAI estimation. The semi-empirical Water Cloud Model (WCM) is implemented to derive maize and winter wheat LAI values from each linear polarization. The cross-polarization and the VV polarization were found to be the most relevant polarizations for retrieving maize and wheat LAI through this model. A combination of the retrieved LAI values and their associated errors for each polarization is then computed to improve the LAI estimation.
Numerical stability in problems of linear algebra.
NASA Technical Reports Server (NTRS)
Babuska, I.
1972-01-01
Mathematical problems are introduced as mappings from the space of input data to that of the desired output information. Then a numerical process is defined as a prescribed recurrence of elementary operations creating the mapping of the underlying mathematical problem. The ratio of the error committed by executing the operations of the numerical process (the roundoff errors) to the error introduced by perturbations of the input data (initial error) gives rise to the concept of lambda-stability. As examples, several processes are analyzed from this point of view, including, especially, old and new processes for solving systems of linear algebraic equations with tridiagonal matrices. In particular, it is shown how such a priori information can be utilized as, for instance, a knowledge of the row sums of the matrix. Information of this type is frequently available where the system arises in connection with the numerical solution of differential equations.
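As an example of the tridiagonal solvers discussed in the abstract above, the classical Thomas algorithm (forward elimination plus back substitution) is sketched below; it is stable without pivoting for diagonally dominant systems such as the illustrative -u'' discretization used here:

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system:
    a = sub-diagonal (length n-1), b = main diagonal (length n),
    c = super-diagonal (length n-1), d = right-hand side (length n)."""
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]                 # forward elimination
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)                     # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D Laplacian stencil: diagonal 2, off-diagonals -1 (exact solution: all ones)
a = np.full(3, -1.0)
b = np.full(4, 2.0)
c = np.full(3, -1.0)
d = np.array([1.0, 0.0, 0.0, 1.0])
x = thomas(a, b, c, d)
```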
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
On the linearization problem for ultraspherical polynomials
NASA Astrophysics Data System (ADS)
Bassetti, B.; Montaldi, E.; Raciti, M.
1986-03-01
A direct proof of a formula established by Bressoud in 1981 [D. M. Bressoud, SIAM J. Math. Anal. 12, 161 (1981)], equivalent to the linearization formula for the ultraspherical polynomials, is given. Some related results are briefly discussed.
A multistage linear array assignment problem
NASA Technical Reports Server (NTRS)
Nicol, David M.; Shier, D. R.; Kincaid, R. K.; Richards, D. S.
1988-01-01
The implementation of certain algorithms on parallel processing computing architectures can involve partitioning contiguous elements into a fixed number of groups, each of which is to be handled by a single processor. It is desired to find an assignment of elements to processors that minimizes the sum of the maximum workloads experienced at each stage. This problem can be viewed as a multi-objective network optimization problem. Polynomially-bounded algorithms are developed for the case of two stages, whereas the associated decision problem (for an arbitrary number of stages) is shown to be NP-complete. Heuristic procedures are therefore proposed and analyzed for the general problem. Computational experience with one of the exact algorithms, incorporating certain pruning rules, is presented. Empirical results also demonstrate that one of the heuristic procedures is especially effective in practice.
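The single-stage special case of this contiguous-partition problem (split a sequence into k contiguous groups minimizing the maximum group workload) admits a simple dynamic program. The sketch below handles only that simplified one-stage variant, with made-up workloads:

```python
import functools

def min_max_partition(loads, k):
    """DP for the single-stage contiguous partition problem:
    split `loads` into k contiguous groups minimizing the
    maximum group sum (simplified variant of the multistage
    assignment problem)."""
    n = len(loads)
    prefix = [0] * (n + 1)              # prefix sums for O(1) group sums
    for i, w in enumerate(loads):
        prefix[i + 1] = prefix[i] + w

    @functools.lru_cache(maxsize=None)
    def best(i, groups):
        # minimal achievable max-load covering loads[i:] with `groups` groups
        if groups == 1:
            return prefix[n] - prefix[i]
        # first group is loads[i:j]; leave enough elements for the rest
        return min(max(prefix[j] - prefix[i], best(j, groups - 1))
                   for j in range(i + 1, n - groups + 2))

    return best(0, k)

# 6 contiguous tasks onto 3 processors: best split is [1,2,3] [4,5] [6]
print(min_max_partition([1, 2, 3, 4, 5, 6], 3))  # -> 9
```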
Complementarity, Sets and Numbers
ERIC Educational Resources Information Center
Otte, M.
2003-01-01
Niels Bohr's term "complementarity" has been used by several authors to capture the essential aspects of the cognitive and epistemological development of scientific and mathematical concepts. In this paper we will conceive of complementarity in terms of the dual notions of extension and intension of mathematical terms. A complementarist approach…
Linear inverse problem of the reactor dynamics
NASA Astrophysics Data System (ADS)
Volkov, N. P.
2017-01-01
The aim of this work is the study of transient processes in nuclear reactors. The mathematical model of the reactor dynamics excluding reverse thermal coupling is investigated. This model is described by a system of integro-differential equations, consisting of a non-stationary anisotropic multispeed kinetic transport equation and a delayed neutron balance equation. An inverse problem is formulated to determine the stationary part of the source function along with the solution of the direct problem. The author obtained sufficient conditions for the existence and uniqueness of a generalized solution of this inverse problem.
The stability problem for linear multistep methods
NASA Astrophysics Data System (ADS)
Aceto, L.; Trigiante, D.
2007-12-01
The paper reviews results on rigorous proofs for stability properties of classes of linear multistep methods (LMMs) used either as IVMs or as BVMs. The considered classes are not only the well-known classical ones (BDF, Adams, ...) along with their BVM correspondent, but also those which were considered unstable as IVMs, but stable as BVMs. Among the latter we find two classes which deserve attention because of their peculiarity: the TOMs (top order methods) which have the highest order allowed to a LMM and the Bs-LMMs which have the property to carry with each method its natural continuous extension.
Symmetry Groups for Linear Programming Relaxations of Orthogonal Array Problems
2015-03-26
Symmetry Groups for Linear Programming Relaxations of Orthogonal Array Problems. Thesis, March 2015. David M. Arquette, Second Lieutenant, USAF. AFIT-ENC-MS-15-M-003. Approved for public release; distribution unlimited. This work is a work of the U.S. Government and is not subject to copyright protection in the United States.
A piecewise linear approximation scheme for hereditary optimal control problems
NASA Technical Reports Server (NTRS)
Cliff, E. M.; Burns, J. A.
1977-01-01
An approximation scheme based on 'piecewise linear' approximations of L2 spaces is employed to formulate a numerical method for solving quadratic optimal control problems governed by linear retarded functional differential equations. This piecewise linear method is an extension of the so called averaging technique. It is shown that the Riccati equation for the linear approximation is solved by simple transformation of the averaging solution. Thus, the computational requirements are essentially the same. Numerical results are given.
NASA Astrophysics Data System (ADS)
Howard, Don
2013-04-01
Complementarity is Niels Bohr's most original contribution to the interpretation of quantum mechanics, but there is widespread confusion about complementarity in the popular literature and even in some of the serious scholarly literature on Bohr. This talk provides a historically grounded guide to Bohr's own understanding of the doctrine, emphasizing the manner in which complementarity is deeply rooted in the physics of the quantum world, in particular the physics of entanglement, and is, therefore, not just an idiosyncratic philosophical addition. Among the more specific points to be made are that complementarity is not to be confused with wave-particle duality, that it is importantly different from Heisenberg's idea of observer-induced limitations on measurability, and that it is in no way an expression of a positivist philosophical project.
Singular linear-quadratic control problem for systems with linear delay
Sesekin, A. N.
2013-12-18
A singular linear-quadratic optimization problem on the trajectories of non-autonomous linear differential equations with linear delay is considered. The peculiarity of this problem is that it has no solution in the class of integrable controls; to ensure the existence of a solution, the class of controls must be expanded to include controls with impulse components. Dynamical systems with linear delay are used to describe the motion of a pantograph picking up current in electric traction, processes in biology, etc. It should be noted that for practical problems, singularity of the quality criterion occurs quite commonly, and therefore the study of these problems is surely important. For the problem under discussion, an optimal programming control containing impulse components at the initial and final moments of time is constructed under certain assumptions on the functional and the right-hand side of the control system.
Boundary control problem of linear Stokes equation with point observations
Ding, Z.
1994-12-31
We discuss the linear quadratic regulator (LQR) problems for the linear Stokes system with point observations on the boundary and box constraints on the boundary control. Using hydropotential theory, we prove that the LQR problems without box constraints on the control do not admit any nontrivial solution, while the LQR problems with box constraints have a unique solution. The optimal control is given explicitly, and its singular behavior is displayed through a decomposition formula. Based upon the characteristic formula of the optimal control, a generic numerical algorithm is given for solving the box-constrained LQR problems.
Multisplitting for linear, least squares and nonlinear problems
Renaut, R.
1996-12-31
In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of least squares problems and nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work with Andreas Frommer, University of Wuppertal, on the linear problems and with Hans Mittelmann, Arizona State University, on the nonlinear problems.
Experiences with linear solvers for oil reservoir simulation problems
Joubert, W.; Janardhan, R.; Biswas, D.; Carey, G.
1996-12-31
This talk will focus on practical experiences with iterative linear solver algorithms used in conjunction with Amoco Production Company's Falcon oil reservoir simulation code. The goal of this study is to determine the best linear solver algorithms for these types of problems. The results of numerical experiments will be presented.
Linear Programming and Its Application to Pattern Recognition Problems
NASA Technical Reports Server (NTRS)
Omalley, M. J.
1973-01-01
Linear programming and linear programming like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
On the Displacement Problem of Plane Linear Elastostatics
NASA Astrophysics Data System (ADS)
Russo, R.
2010-09-01
We consider the displacement problem of linear elastostatics in a Lipschitz exterior domain of R2. We prove that if the boundary datum a lies in L2(∂Ω), then the problem has a unique very weak solution, which converges to an assigned constant vector u∞ at infinity if and only if a and u∞ satisfy a suitable compatibility condition.
Solving linear integer programming problems by a novel neural model.
Cavalieri, S
1999-02-01
The paper deals with integer linear programming problems. As is well known, these are extremely complex problems, even when the number of integer variables is quite low. The literature provides examples of various methods to solve such problems, some of which are of a heuristic nature. This paper proposes an alternative strategy based on the Hopfield neural network. The advantage of the strategy essentially lies in the fact that a hardware implementation of the neural model allows the time required to obtain a solution to be independent of the size of the problem to be solved. The paper presents a particular class of integer linear programming problems, including well-known problems such as the Travelling Salesman Problem and the Set Covering Problem. After a brief description of this class of problems, it is demonstrated that the original Hopfield model is incapable of supplying valid solutions. This is attributed to the presence of constant bias currents in the dynamics of the neural model. A demonstration of this is given, and then a novel neural model is presented which continues to be based on the same architecture as the Hopfield model but introduces modifications thanks to which the integer linear programming problems presented can be solved. Some numerical examples and concluding remarks highlight the solving capacity of the novel neural model.
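The penalty-energy idea behind Hopfield-style formulations of such problems can be sketched on a toy Set Covering instance: each uncovered element is charged a large penalty A, each selected set costs B, and minimizing the energy yields a valid minimum cover. This is an illustrative sketch under our own assumptions (names, constants, and brute-force search in place of network dynamics), not the paper's novel model.

```python
from itertools import product

def set_cover_energy(x, sets, universe, A=10.0, B=1.0):
    """Penalty-style energy of a binary selection vector x, in the
    spirit of Hopfield-network formulations of Set Covering: a large
    penalty A per uncovered element plus a cost B per selected set."""
    covered = set()
    for xi, s in zip(x, sets):
        if xi:
            covered |= s
    uncovered = len(universe - covered)
    return A * uncovered + B * sum(x)

# Brute-force minimization stands in for the network's dynamics.
sets = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
universe = {1, 2, 3, 4}
best = min(product([0, 1], repeat=len(sets)),
           key=lambda x: set_cover_energy(x, sets, universe))
```

With A much larger than B, any energy minimizer is a valid cover; here two complementary sets suffice, for an energy of 2.0.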
Efficient numerical methods for entropy-linear programming problems
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Gasnikova, E. B.; Nesterov, Yu. E.; Chernov, A. V.
2016-04-01
Entropy-linear programming (ELP) problems arise in various applications. They are usually written as the maximization of entropy (minimization of minus entropy) under affine constraints. In this work, new numerical methods for solving ELP problems are proposed. Sharp estimates for the convergence rates of the proposed methods are established. The approach described applies to a broader class of minimization problems for strongly convex functionals with affine constraints.
Local regularization of linear inverse problems via variational filtering
NASA Astrophysics Data System (ADS)
Lamm, Patricia K.
2017-08-01
We develop local regularization methods for ill-posed linear inverse problems governed by general Fredholm integral operators. The methods are executed as filtering algorithms which are simple to implement and computationally efficient for a large class of problems. We establish a convergence theory and give convergence rates for such methods, and illustrate their computational speed in numerical tests for inverse problems in geomagnetic exploration and imaging.
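For comparison with such local methods, classical global Tikhonov regularization of an ill-posed linear problem can be sketched in a few lines: minimize ||Ax − b||² + λ||x||² by solving the normal equations (AᵀA + λI)x = Aᵀb. A minimal dense sketch with hypothetical names, not the paper's filtering algorithms:

```python
def tikhonov_solve(A, b, lam):
    """Regularized least squares: solve (A^T A + lam*I) x = A^T b
    by Gaussian elimination with partial pivoting (dense, for clarity)."""
    m, n = len(A), len(A[0])
    # Form M = A^T A + lam*I and rhs = A^T b.
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Forward elimination.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

For A = I and λ = 1, the solution is simply b/2, which makes the shrinkage effect of the regularization term easy to see.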
Liang, X B; Si, J
2001-01-01
This paper investigates the existence, uniqueness, and global exponential stability (GES) of the equilibrium point for a large class of neural networks with globally Lipschitz continuous activations including the widely used sigmoidal activations and the piecewise linear activations. The provided sufficient condition for GES is mild and some conditions easily examined in practice are also presented. The GES of neural networks in the case of locally Lipschitz continuous activations is also obtained under an appropriate condition. The analysis results given in the paper extend substantially the existing relevant stability results in the literature, and therefore expand significantly the application range of neural networks in solving optimization problems. As a demonstration, we apply the obtained analysis results to the design of a recurrent neural network (RNN) for solving the linear variational inequality problem (VIP) defined on any nonempty and closed box set, which includes the box constrained quadratic programming and the linear complementarity problem as the special cases. It can be inferred that the linear VIP has a unique solution for the class of Lyapunov diagonally stable matrices, and that the synthesized RNN is globally exponentially convergent to the unique solution. Some illustrative simulation examples are also given.
Singular linear quadratic control problem for systems with linear and constant delay
NASA Astrophysics Data System (ADS)
Sesekin, A. N.; Andreeva, I. Yu.; Shlyakhov, A. S.
2016-12-01
This article is devoted to the singular linear-quadratic optimization problem on the trajectories of a linear non-autonomous system of differential equations with linear and constant delay. It should be noted that such a problem has no solution in the class of integrable controls, so to ensure the existence of a solution the class of controls must be expanded to include impulse components. For the problem under consideration, we have built a program control containing impulse components at the initial and final moments of time. This is done under certain assumptions on the functional and the right-hand side of the control system.
An Algorithm for Linearly Constrained Nonlinear Programming Problems.
1980-01-01
An Algorithm for Linearly Constrained Nonlinear Programming Problems, by Mokhtar S. Bazaraa and Jamie J. Goode. In this paper an algorithm for solving a linearly constrained nonlinear programming problem is presented. Minimum-distance programming, as in the works of Bazaraa and Goode [2] and Wolfe [16], can be used for solving this problem; special methods that take advantage of ... References: 1) Pacific Journal of Mathematics, Volume 16, pp. 1-3, 1966. 2) M. S. Bazaraa and J. J. Goode, "An Algorithm for Finding the Shortest Element of a ...
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
NASA Astrophysics Data System (ADS)
Rahmalia, Dinita
2017-08-01
Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to the balance of supply and demand. Exact methods such as the northwest-corner, Vogel, and Russell approximation methods and the minimal-cost method have been applied to approach an optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem at any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution produced by PSO.
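The northwest-corner rule mentioned above is simple enough to sketch. Assuming a balanced problem (total supply equals total demand), it fills the allocation table from the top-left corner; a minimal illustrative implementation with names of our choosing:

```python
def northwest_corner(supply, demand):
    """Build an initial basic feasible solution for a balanced
    transportation problem using the northwest-corner rule."""
    supply, demand = list(supply), list(demand)  # work on copies
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(supply[i], demand[j])  # ship as much as possible
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1  # row exhausted: move down
        else:
            j += 1  # column exhausted: move right
    return alloc
```

The resulting allocation exhausts every row's supply and every column's demand, giving a feasible (though usually not cost-optimal) starting point for the exact methods.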
Linear decomposition approach for a class of nonconvex programming problems.
Shen, Peiping; Wang, Chunfeng
2017-01-01
This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require quasi-concavity or differentiability of the objective function, and it offers an interesting approach to solving the problem with a reduced running time.
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with uncertain probability distributions which are defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and two solution strategies are given: the fuzzy transformation via a ranking function, and the stochastic transformation in which the α-cut technique and linguistic hedges are used on the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
A nonlinear complementarity approach for the national energy modeling system
Gabriel, S.A.; Kydes, A.S.
1995-03-08
The National Energy Modeling System (NEMS) is a large-scale mathematical model that computes equilibrium fuel prices and quantities in the U.S. energy sector. At present, to generate these equilibrium values, NEMS sequentially solves a collection of linear programs and nonlinear equations. The NEMS solution procedure then incorporates the solutions of these linear programs and nonlinear equations in a nonlinear Gauss-Seidel approach. The authors describe how the current version of NEMS can be formulated as a particular nonlinear complementarity problem (NCP), thereby possibly avoiding current convergence problems. In addition, they show that the NCP format is equally valid for a more general form of NEMS. They also describe several promising approaches for solving the NCP form of NEMS based on recent Newton type methods for general NCPs. These approaches share the feature of needing to solve their direction-finding subproblems only approximately. Hence, they can effectively exploit the sparsity inherent in the NEMS NCP.
Unique radiation problems associated with the SLAC Linear Collider
Jenkins, T.M.; Nelson, W.R.
1987-01-01
The SLAC Linear Collider (SLC) is a variation of a new class of linear colliders whereby two linear accelerators are aimed at each other to collide intense bunches of electrons and positrons. Conventional storage rings are becoming ever more costly as the energy of the stored beams increases, such that the cost per GeV of two linear colliders is less than that of electron-positron storage rings at center-of-mass energies above about 100 GeV. The SLC being built at SLAC is designed to achieve a center-of-mass energy of 100 GeV by accelerating intense bunches of particles, both electrons and positrons, in the SLAC linac and transporting them along two different arcs to a point where they are focused to a small radius and made to collide head on. The SLC has two main goals. The first is to develop the physics and technology of linear colliders. The other is to achieve center-of-mass energies above 90 GeV in order to investigate the unification of the weak and electromagnetic interactions in that energy range (i.e., the Z0, etc.). This note discusses a few of the special problems that were encountered by the Radiation Physics group at SLAC during the design and construction of the SLAC Linear Collider. The nature of these problems is discussed along with the methods employed to solve them.
Hierarchical Multiobjective Linear Programming Problems with Fuzzy Domination Structures
NASA Astrophysics Data System (ADS)
Yano, Hitoshi
2010-10-01
In this paper, we focus on hierarchical multiobjective linear programming problems with fuzzy domination structures, where multiple decision makers in a hierarchical organization have their own multiple objective linear functions together with common linear constraints. After introducing decision powers and the solution concept based on the α-level set for the fuzzy convex cone Λ which reflects a fuzzy domination structure, we propose a fuzzy approach to obtain a satisfactory solution which reflects not only the hierarchical relationships between the multiple decision makers but also their own preferences for their membership functions. In the proposed method, instead of the Pareto optimality concept, a generalized Λ̃α-extreme point concept is introduced. In order to obtain a satisfactory solution from among a generalized Λ̃α-extreme point set, an interactive algorithm based on linear programming is proposed, and the interactive process is demonstrated by means of an illustrative numerical example.
Complementarity and stability conditions
NASA Astrophysics Data System (ADS)
Georgi, Howard
2017-08-01
We discuss the issue of complementarity between the confining phase and the Higgs phase for gauge theories in which there are no light particles below the scale of confinement or spontaneous symmetry breaking. We show with a number of examples that even though the low energy effective theories are the same (and trivial), discontinuous changes in the structure of heavy stable particles can signal a phase transition, and thus we can sometimes argue that two phases which have different structures of heavy particles cannot be continuously connected, so the phases cannot be complementary. We discuss what this means and suggest that such "stability conditions" can be a useful physical check for complementarity.
Sparse stochastic processes and discretization of linear inverse problems.
Bostan, Emrah; Kamilov, Ulugbek S; Nilchian, Masih; Unser, Michael
2013-07-01
We present a novel statistically-based discretization paradigm and derive a class of maximum a posteriori (MAP) estimators for solving ill-conditioned linear inverse problems. We are guided by the theory of sparse stochastic processes, which specifies continuous-domain signals as solutions of linear stochastic differential equations. Accordingly, we show that the class of admissible priors for the discretized version of the signal is confined to the family of infinitely divisible distributions. Our estimators not only cover the well-studied methods of Tikhonov and l1-type regularizations as particular cases, but also open the door to a broader class of sparsity-promoting regularization schemes that are typically nonconvex. We provide an algorithm that handles the corresponding nonconvex problems and illustrate the use of our formalism by applying it to deconvolution, magnetic resonance imaging, and X-ray tomographic reconstruction problems. Finally, we compare the performance of estimators associated with models of increasing sparsity.
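The l1-type regularization mentioned above reduces, in proximal algorithms, to coordinatewise soft-thresholding. A minimal sketch of that standard building block (ours, not the paper's MAP estimators):

```python
def soft_threshold(v, t):
    """Proximal operator of t*|.|: shrink v toward zero by t and
    clip to zero inside [-t, t]. This is the elementary step behind
    l1-type (sparsity-promoting) regularization."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0
```

Small coefficients are set exactly to zero, which is why l1 penalties promote sparse solutions.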
On the linear properties of the nonlinear radiative transfer problem
NASA Astrophysics Data System (ADS)
Pikichyan, H. V.
2016-11-01
In this report, we further develop the assertions made for the nonlinear problem of reflection/transmission of radiation by a scattering/absorbing one-dimensional anisotropic medium of finite geometrical thickness, when both of its boundaries are illuminated by intense monochromatic radiative beams. The new conceptual element of well-defined, so-called linear images is noteworthy; they admit a probabilistic interpretation. In the framework of the nonlinear reflection/transmission problem, we derive a solution which is similar to the linear case: the solution reduces to a linear combination of linear images. By virtue of their physical meaning, these functions describe the reflectivity and transmittance of the medium for a single photon, or a beam of unit intensity, incident on one of the boundaries of the layer, while the medium remains under bilateral illumination by external exciting radiation of arbitrary intensity. To determine the linear images, we exploit three well-known methods: (i) adding of layers, (ii) its limiting form, described by the differential equations of invariant imbedding, and (iii) a transition to the so-called functional equations of "Ambartsumyan's complete invariance".
Towards an ideal preconditioner for linearized Navier-Stokes problems
Murphy, M.F.
1996-12-31
Discretizing certain linearizations of the steady-state Navier-Stokes equations gives rise to nonsymmetric linear systems with indefinite symmetric part. We show that for such systems there exists a block diagonal preconditioner which gives convergence in three GMRES steps, independent of the mesh size and viscosity parameter (Reynolds number). While this "ideal" preconditioner is too expensive to be used in practice, it provides a useful insight into the problem. We then consider various approximations to the ideal preconditioner, and describe the eigenvalues of the preconditioned systems. Finally, we compare these preconditioners numerically, and present our conclusions.
Diffusion LMS for Multitask Problems With Local Linear Equality Constraints
NASA Astrophysics Data System (ADS)
Nassif, Roula; Richard, Cedric; Ferrari, Andre; Sayed, Ali H.
2017-10-01
We consider distributed multitask learning problems over a network of agents where each agent is interested in estimating its own parameter vector, also called task, and where the tasks at neighboring agents are related according to a set of linear equality constraints. Each agent possesses its own convex cost function of its parameter vector and a set of linear equality constraints involving its own parameter vector and the parameter vectors of its neighboring agents. We propose an adaptive stochastic algorithm based on the projection gradient method and diffusion strategies in order to allow the network to optimize the individual costs subject to all constraints. Although the derivation is carried out for linear equality constraints, the technique can be applied to other forms of convex constraints. We conduct a detailed mean-square-error analysis of the proposed algorithm and derive closed-form expressions to predict its learning behavior. We provide simulations to illustrate the theoretical findings. Finally, the algorithm is employed for solving two problems in a distributed manner: a minimum-cost flow problem over a network and a space-time varying field reconstruction problem.
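The projection step that such projection-gradient methods rely on has, for a single linear equality constraint c·x = d, the closed form x − ((c·x − d)/||c||²) c. A minimal single-constraint sketch with hypothetical names (the paper handles general sets of linear equality constraints across agents):

```python
def project_hyperplane(x, c, d):
    """Euclidean projection of x onto the constraint set {x : c.x = d},
    a single-constraint special case of the projection used in
    projection-gradient methods."""
    dot = sum(ci * xi for ci, xi in zip(c, x))
    norm2 = sum(ci * ci for ci in c)  # ||c||^2, assumed nonzero
    f = (dot - d) / norm2
    return [xi - f * ci for ci, xi in zip(c, x)]
```

The projected point satisfies the constraint exactly and is the closest such point to x in the Euclidean norm.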
A modular hierarchy-based theory of the chemical origins of life based on molecular complementarity.
Root-Bernstein, Robert
2012-12-18
Molecular complementarity plays critical roles in the evolution of chemical systems and resolves a significant number of outstanding problems in the emergence of complex systems. All physical and mathematical models of organization within complex systems rely upon nonrandom linkage between components. Molecular complementarity provides a naturally occurring nonrandom linker. More importantly, the formation of hierarchically organized stable modules vastly improves the probability of achieving self-organization, and molecular complementarity provides a mechanism by which hierarchically organized stable modules can form. Finally, modularity based on molecular complementarity produces a means for storing and replicating information. Linear replicating molecules such as DNA or RNA are not required to transmit information from one generation of compounds to the next: compositional replication is as ubiquitous in living systems as genetic replication and is equally important to their functions. Chemical systems composed of complementary modules mediate this compositional replication and gave rise to linear replication schemes. In sum, I propose that molecular complementarity is ubiquitous in living systems because it provides the physicochemical basis for the modular, hierarchical ordering and replication necessary for the evolution of the chemical systems upon which life is based. I conjecture that complementarity more generally is an essential agent that mediates evolution at every level of organization.
Geodetic linear estimation technique and the norm choice problem
NASA Technical Reports Server (NTRS)
Dermanis, A.
1977-01-01
In this work the mathematical and probabilistic background of standard linear estimation techniques used in geodesy is clarified, and their interrelationship is revealed with the help of best approximation theory and the normal equations. Emphasis is given to the separation of the deterministic solution to the approximation problem from the probabilistic justification of the metric of the approximation. Least squares prediction has been related to deterministic (exact) collocation, and minimum error bound has been identified as a prediction optimality criterion in the latter. Criteria for the optimal choice of norm in Hilbert space collocation are proposed for gravimetric geodesy problems.
An analytically solvable eigenvalue problem for the linear elasticity equations.
Day, David Minot; Romero, Louis Anthony
2004-07-01
Analytic solutions are useful for code verification. Structural vibration codes approximate solutions to the eigenvalue problem for the linear elasticity equations (Navier's equations). Unfortunately the verification method of 'manufactured solutions' does not apply to vibration problems. Verification books (for example [2]) tabulate a few of the lowest modes, but are not useful for computations of large numbers of modes. A closed form solution is presented here for all the eigenvalues and eigenfunctions for a cuboid solid with isotropic material properties. The boundary conditions correspond physically to a greased wall.
Private algebras in quantum information and infinite-dimensional complementarity
Crann, Jason; Kribs, David W.; Levene, Rupert H.; Todorov, Ivan G.
2016-01-15
We introduce a generalized framework for private quantum codes using von Neumann algebras and the structure of commutants. This leads naturally to a more general notion of complementary channel, which we use to establish a generalized complementarity theorem between private and correctable subalgebras that applies to both the finite and infinite-dimensional settings. Linear bosonic channels are considered and specific examples of Gaussian quantum channels are given to illustrate the new framework together with the complementarity theorem.
Positional and impulse strategies for linear problems of motion correction
NASA Astrophysics Data System (ADS)
Ananyev, B. I.; Gredasova, N. V.
2016-12-01
Control problems for a linear system with incomplete information are considered. It is supposed that a linear signal with additive noise is observed. This noise, along with the disturbances in the state equation, is bounded by quadratic constraints. In the first case, the control action in the state equation is contained in a compact set; in the second case, the total variation of the control is restricted. The latter case leads to a sequence of impulse control actions (delta functions). For both cases, we obtain definite relations for the optimal control actions that guarantee the minimax value of the terminal functional. We use methods of control theory under uncertainty and dynamic programming. Some examples from the theory of the motion of space and flight vehicles are investigated.
An Algorithm for Solving Interval Linear Programming Problems
1974-11-01
The linear program is "regularized" à la Charnes-Cooper so that infeasibility is determined at the optimal solution if that is the case. If I(x*(v)) = 0 then x*(v) is ... Charnes and Cooper [3] may be used to compute the new inverse. Theorem 2: The algorithm described above terminates in a finite number of steps. ... References: 1) A. Ben-Israel and A. Charnes, "An Explicit Solution of a Special Class of Linear Programming Problems", Operations Research.
Extracting Embedded Generalized Networks from Linear Programming Problems.
1984-09-01
EXTRACTING EMBEDDED GENERALIZED NETWORKS FROM LINEAR PROGRAMMING PROBLEMS, by Gerald G. Brown, Richard D. McBride, and R. Kevin Wood. Gerald G. Brown, Naval Postgraduate School, Monterey, California 93943; Richard D. McBride, University of Southern California, Los Angeles.
A recurrent neural network for solving bilevel linear programming problem.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian
2014-04-01
In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.
Rees algebras, Monomial Subrings and Linear Optimization Problems
NASA Astrophysics Data System (ADS)
Dupont, Luis A.
2010-06-01
In this thesis we are interested in studying algebraic properties of monomial algebras, that can be linked to combinatorial structures, such as graphs and clutters, and to optimization problems. A goal here is to establish bridges between commutative algebra, combinatorics and optimization. We study the normality and the Gorenstein property-as well as the canonical module and the a-invariant-of Rees algebras and subrings arising from linear optimization problems. In particular, we study algebraic properties of edge ideals and algebras associated to uniform clutters with the max-flow min-cut property or the packing property. We also study algebraic properties of symbolic Rees algebras of edge ideals of graphs, edge ideals of clique clutters of comparability graphs, and Stanley-Reisner rings.
Optimized constraints for the linearized geoacoustic inverse problem.
Ballard, Megan S; Becker, Kyle M
2011-02-01
A geoacoustic inversion scheme to estimate the depth-dependent sound speed characteristics of the shallow-water waveguide is presented. The approach is based on the linearized perturbative technique developed by Rajan et al. [J. Acoust. Soc. Am. 82, 998-1017 (1987)]. This method is applied by assuming a background starting model for the environment that includes both the water column and the seabed. Typically, the water column properties are assumed to be known and held fixed in the inversion. Successful application of the perturbative inverse technique lies in handling issues of stability and uniqueness associated with solving a discrete ill-posed problem. Conventionally, such problems are regularized, a procedure which results in a smooth solution. Past applications of this inverse technique have been restricted to cases for which the water column sound speed profile was known and sound speed in the seabed could be approximated by a smooth profile. In this work, constraints that are better suited to specific aspects of the geoacoustic inverse problem are applied. These techniques expand on the original application of the perturbative inverse technique by including the water column sound speed profile in the solution and by allowing for discontinuities in the seabed sound speed profile.
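The conventional regularization that the abstract contrasts with its tailored constraints can be sketched as Tikhonov-style damped least squares with a second-difference roughening operator. The function and variable names below are illustrative, not from the paper:

```python
import numpy as np

def tikhonov_smooth(G, d, alpha):
    """Smoothing-regularized solution of the discrete ill-posed system G m = d.

    Minimizes ||G m - d||^2 + alpha^2 ||L m||^2, where L is the
    second-difference (roughening) operator, so the recovered profile is
    biased toward smoothness -- the behavior the abstract describes for
    conventionally regularized perturbative inversion.
    """
    n = G.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)            # second-difference operator
    A = np.vstack([G, alpha * L])                  # stacked damped least squares
    rhs = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return m

# toy check: a linear profile has zero roughness, so it is recovered exactly
G = np.eye(5)
d = np.linspace(0.0, 1.0, 5)
m = tikhonov_smooth(G, d, alpha=10.0)
```

The point of the sketch is that the penalty term drives the solution toward profiles with small second differences, which is exactly why discontinuous seabed sound speed profiles are poorly served by this choice.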
Linearization of the boundary-layer equations of the minimum time-to-climb problem
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1979-01-01
Ardema (1974) has formally linearized the two-point boundary value problem arising from a general optimal control problem, and has reviewed the known stability properties of such a linear system. In the present paper, Ardema's results are applied to the minimum time-to-climb problem. The linearized zeroth-order boundary layer equations of the problem are derived and solved.
First integrals for the Kepler problem with linear drag
NASA Astrophysics Data System (ADS)
Margheri, Alessandro; Ortega, Rafael; Rebelo, Carlota
2017-01-01
In this work we consider the Kepler problem with linear drag, and prove the existence of a continuous vector-valued first integral, obtained taking the limit as t→ +∞ of the Runge-Lenz vector. The norm of this first integral can be interpreted as an asymptotic eccentricity e_{∞} with 0≤ e_{∞} ≤ 1. The orbits satisfying e_{∞} <1 approach the singularity by an elliptic spiral and the corresponding solutions x(t)=r(t)e^{iθ (t)} have a norm r( t) that goes to zero like a negative exponential and an argument θ (t) that goes to infinity like a positive exponential. In particular, the difference between consecutive times of passage through the pericenter, say T_{n+1} -T_n, goes to zero as 1/n.
Using parallel banded linear system solvers in generalized eigenvalue problems
NASA Technical Reports Server (NTRS)
Zhang, Hong; Moss, William F.
1993-01-01
Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
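The shifted subspace iteration described above can be sketched densely as follows; the paper itself factors the banded matrix (A - sigma*B) with parallel banded solvers, while this illustration uses dense NumPy/SciPy routines and hypothetical function names:

```python
import numpy as np
from scipy.linalg import eigh, qr, solve

def shifted_subspace_iteration(A, B, sigma, p, iters=50, seed=0):
    """Subspace iteration with a shift for the symmetric positive definite
    generalized eigenproblem A x = lam B x (dense sketch of the idea only).
    """
    n = A.shape[0]
    K = A - sigma * B                          # shifted matrix; factored once in practice
    X = np.random.default_rng(seed).normal(size=(n, p))
    for _ in range(iters):
        Y = solve(K, B @ X)                    # inverse-iteration step
        Q, _ = qr(Y, mode='economic')          # re-orthonormalize the basis
        w, V = eigh(Q.T @ A @ Q, Q.T @ B @ Q)  # Rayleigh-Ritz projection
        X = Q @ V
    return w, X

# diagonal test pencil: the three smallest eigenvalues are 1, 2, 3
A = np.diag(np.arange(1.0, 21.0))
B = np.eye(20)
w, X = shifted_subspace_iteration(A, B, sigma=0.0, p=3)
```

Choosing sigma near the sought eigenvalues accelerates convergence, while in the parallel setting it also decouples the banded solve into nearly independent subsystems, which is the trade-off the abstract's "optimal shift" balances.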
Status Report: Black Hole Complementarity Controversy
NASA Astrophysics Data System (ADS)
Lee, Bum-Hoon; Yeom, Dong-han
2014-01-01
Black hole complementarity was a consensus among string theorists for the interpretation of the information loss problem. Recently, however, some authors have found inconsistencies in black hole complementarity: the large-N rescaling argument and the Almheiri, Marolf, Polchinski and Sully (AMPS) argument. According to AMPS, the horizon should be a firewall, so that one cannot penetrate it, for consistency. There have been controversial discussions of the firewall. Apart from these papers, we advance an assertion using a semi-regular black hole model and conclude that the firewall, if it exists, should affect the asymptotic observer. In addition, any argument that does not take into account the duplication experiment and the large-N rescaling is difficult to accept.
The Afshar Experiment and Complementarity
NASA Astrophysics Data System (ADS)
Kastner, Ruth
2006-03-01
A modified version of Young's experiment by Shahriar Afshar demonstrates that, prior to what appears to be a ``which-way'' measurement, an interference pattern exists. Afshar has claimed that this result constitutes a violation of the Principle of Complementarity. This paper discusses the implications of this experiment and considers how Cramer's Transactional Interpretation easily accommodates the result. It is also shown that the Afshar experiment is isomorphic in key respects to a spin one-half particle prepared as ``spin up along x'' and post-selected in a specific state of spin along z. The terminology ``which way'' or ``which-slit'' is critiqued; it is argued that this usage by both Afshar and his critics is misleading and has contributed to confusion surrounding the interpretation of the experiment. Nevertheless, it is concluded that Bohr would have had no more problem accounting for the Afshar result than he would in accounting for the aforementioned pre- and post-selection spin experiment, in which the particle's preparation state is confirmed by a nondestructive measurement prior to post-selection. In addition, some new inferences about the interpretation of delayed choice experiments are drawn from the analysis.
The intelligence of dual simplex method to solve linear fractional fuzzy transportation problem.
Narayanamoorthy, S; Kalyani, S
2015-01-01
An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In this approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems, which are solved by the dual simplex method; from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example.
Multigrid approaches to non-linear diffusion problems on unstructured meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
The efficiency of three multigrid methods for solving highly non-linear diffusion problems on two-dimensional unstructured meshes is examined. The three multigrid methods differ mainly in the manner in which the nonlinearities of the governing equations are handled. These comprise a non-linear full approximation storage (FAS) multigrid method which is used to solve the non-linear equations directly, a linear multigrid method which is used to solve the linear system arising from a Newton linearization of the non-linear system, and a hybrid scheme which is based on a non-linear FAS multigrid scheme, but employs a linear solver on each level as a smoother. Results indicate that all methods are equally effective at converging the non-linear residual in a given number of grid sweeps, but that the linear solver is more efficient in cpu time due to the lower cost of linear versus non-linear grid sweeps.
Algebraic complementarity in quantum theory
Petz, Denes
2010-01-15
This paper is an overview of the concept of complementarity, its relation to state estimation, to the Connes-Stoermer conditional (or relative) entropy, and to the uncertainty relation. Complementary Abelian and noncommutative subalgebras are analyzed. All the known results about complementary decompositions are described and several open questions are included. The paper contains only a few proofs; typically references are given.
Fixed Point Problems for Linear Transformations on Pythagorean Triples
ERIC Educational Resources Information Center
Zhan, M.-Q.; Tong, J.-C.; Braza, P.
2006-01-01
In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z][superscript T] with y being even) into a standard Pythagorean triple, which have [3 4 5][superscript T] as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…
LP Relaxation of the Potts Labeling Problem Is as Hard as any Linear Program.
Prusa, Daniel; Werner, Tomas
2016-06-20
In our recent work, we showed that solving the LP relaxation of the pairwise min-sum labeling problem (also known as MAP inference in graphical models or discrete energy minimization) is not much easier than solving any linear program. Precisely, the general linear program reduces in linear time (assuming the Turing model of computation) to the LP relaxation of the min-sum labeling problem. The reduction is possible, though in quadratic time, even to the min-sum labeling problem with planar structure. Here we prove similar results for the pairwise min-sum labeling problem with attractive Potts interactions (also known as the uniform metric labeling problem).
A linear regression solution to the spatial autocorrelation problem
NASA Astrophysics Data System (ADS)
Griffith, Daniel A.
The Moran Coefficient spatial autocorrelation index can be decomposed into orthogonal map pattern components. This decomposition relates it directly to standard linear regression, in which the corresponding eigenvectors can be used as predictors. This paper reports comparative results between these linear regressions and their auto-Gaussian counterparts for the following georeferenced data sets: Columbus (Ohio) crime, Ottawa-Hull median family income, Toronto population density, southwest Ohio unemployment, Syracuse pediatric lead poisoning, Glasgow standard mortality rates, and a small remotely sensed image of the High Peak district. The methodology is extended to auto-logistic and auto-Poisson situations, with selected data analyses including the percentage of urban population across Puerto Rico and the frequency of SIDS cases across North Carolina. These data analytic results suggest that this approach to georeferenced data analysis offers considerable promise.
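The eigenvector decomposition described above can be sketched as follows: the eigenvectors of the doubly-centred spatial weights matrix serve as orthogonal map-pattern predictors in an ordinary least-squares regression. The names and the toy adjacency are illustrative, not from the paper:

```python
import numpy as np

def moran_eigenvectors(W, k):
    """Leading eigenvectors of the doubly-centred spatial weights matrix
    M W M, with M = I - 11'/n (W assumed symmetric). These orthogonal
    map-pattern components can be appended as regression predictors to
    absorb spatial autocorrelation.
    """
    n = W.shape[0]
    M = np.eye(n) - np.ones((n, n)) / n
    vals, vecs = np.linalg.eigh(M @ W @ M)
    order = np.argsort(vals)[::-1]            # most positively autocorrelated first
    return vecs[:, order[:k]]

# rook adjacency for 4 sites on a line
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
E = moran_eigenvectors(W, 2)
y = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones(4), E])          # intercept plus eigenvector filters
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the eigenvectors are mutually orthogonal, adding them to the design matrix filters spatial pattern out of the residuals without destabilizing the other coefficient estimates.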
The acoustics of a concert hall as a linear problem
NASA Astrophysics Data System (ADS)
Lokki, Tapio; Pätynen, Jukka
2015-01-01
The main purpose of a concert hall is to convey sound from musicians to listeners and to reverberate the music for a more pleasant experience in the audience area. This process is linear and can be represented with impulse responses. However, despite studying measured and simulated impulse responses for decades, researchers have not been able to exhaustively explain the success and reputation of certain concert halls.
Method for Solving Physical Problems Described by Linear Differential Equations
NASA Astrophysics Data System (ADS)
Belyaev, B. A.; Tyurnev, V. V.
2017-01-01
A method for solving physical problems is suggested in which the general solution of a partial differential equation is written as an expansion in spherical harmonics with undetermined coefficients. The values of these coefficients are determined by comparing the expansion with a solution obtained for any simplest particular case of the examined problem. The efficiency of the method is demonstrated by calculating the electromagnetic fields generated by a current-carrying circular wire. The formulas obtained can be used to analyze paths in near-field magnetic (magnetically inductive) communication systems working in moderately conductive media, for example, in sea water.
From a Nonlinear, Nonconvex Variational Problem to a Linear, Convex Formulation
Egozcue, J. Meziat, R. Pedregal, P.
2002-12-19
We propose a general approach to deal with nonlinear, nonconvex variational problems based on a reformulation of the problem resulting in an optimization problem with linear cost functional and convex constraints. As a first step we explicitly explore these ideas to some one-dimensional variational problems and obtain specific conclusions of an analytical and numerical nature.
Aspects of complementarity and uncertainty
NASA Astrophysics Data System (ADS)
Vathsan, Radhika; Qureshi, Tabish
2016-08-01
The two-slit experiment with quantum particles provides many insights into the behavior of quantum mechanics, including Bohr’s complementarity principle. Here, we analyze Einstein’s recoiling slit version of the experiment and show how the inevitable entanglement between the particle and the recoiling slit as a which-way detector is responsible for complementarity. We derive the Englert-Greenberger-Yasin duality from this entanglement, which can also be thought of as a consequence of sum-uncertainty relations between certain complementary observables of the recoiling slit. Thus, entanglement is an integral part of the which-way detection process, and so is uncertainty, though in a completely different way from that envisaged by Bohr and Einstein.
ERIC Educational Resources Information Center
Kar, Tugrul
2016-01-01
This study examined prospective middle school mathematics teachers' problem-posing skills by investigating their ability to associate linear graphs with daily life situations. Prospective teachers were given linear graphs and asked to pose problems that could potentially be represented by the graphs. Their answers were analyzed in two stages. In…
Fundamental solution of the problem of linear programming and method of its determination
NASA Technical Reports Server (NTRS)
Petrunin, S. V.
1978-01-01
The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
Using Parallel Banded Linear System Solvers in Generalized Eigenvalue Problems
1993-09-01
The PPT algorithm is similar to an algorithm introduced by Lawrie and Sameh in [18]. The PDD algorithm is a variant of PPT which uses the fa-t... AND L. JOHNSSON, Solving banded systems on a parallel processor, Parallel Comput., 5 (1987), pp. 219-246. [10] J. J. DONGARRA AND A. SAMEH, On some... symmetric generalized matrix eigenvalue problem, SIAM J. Matrix Anal. Appl., 14 (1993). [18] D. H. LAWRIE AND A. H. SAMEH, The computation and
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch-and-bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, equivalent to a linear program, is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
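The idea of bounding a multiplicative objective through linear programs can be illustrated crudely: bound each linear factor over the polytope by an LP and multiply the minima. This is only a sketch of the bounding principle under the assumption that both factors stay positive on the feasible set; the paper's two-phase relaxation is tighter, and all names here are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def product_lower_bound(c1, c2, A_ub, b_ub):
    """Crude lower bound for min (c1.x)(c2.x) over {x >= 0 : A_ub x <= b_ub}.

    Solves two LPs to find the minimum of each linear factor, then
    multiplies them; valid when both factors are positive on the
    feasible set. A branch-and-bound scheme would refine such bounds
    on successively smaller subregions.
    """
    l1 = linprog(c1, A_ub=A_ub, b_ub=b_ub).fun   # min of first factor
    l2 = linprog(c2, A_ub=A_ub, b_ub=b_ub).fun   # min of second factor
    return l1 * l2

# box 1 <= x1, x2 <= 2: the bound equals the true minimum of x1*x2, namely 1
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.array([2, -1, 2, -1], dtype=float)
lb = product_lower_bound(np.array([1.0, 0.0]), np.array([0.0, 1.0]), A, b)
```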
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
Usefulness and problems of stereotactic radiosurgery using a linear accelerator.
Naoi, Y; Cho, N; Miyauchi, T; Iizuka, Y; Maehara, T; Katayama, H
1996-01-01
Since the introduction of linac radiosurgery in October 1994, we have treated 27 patients with 36 lesions: nine AVMs, 12 metastatic brain tumors, two malignant lymphomas, one anaplastic astrocytoma, two meningiomas, and one brain tumor of unknown pathology. In follow-up examinations at least five months after treatment, the local control rate was 83% for the metastatic tumors, and the two malignant lymphomas disappeared completely. In addition, satisfactory results have been obtained with AVMs and other brain tumors without any side effects. In comparison with gamma-knife radiosurgery, linac radiosurgery has some disadvantages, such as longer treatment time and cumbersome accuracy control, but if accuracy control is performed periodically, accuracies of 1 mm or less can be obtained. Linac radiosurgery also has several strengths: 1) the acquisition cost is relatively low; 2) dose distributions are equivalent to those of the gamma knife; 3) there is no field-size limitation; 4) there is great flexibility in beam delivery and linac systems. Radiosurgery using linear accelerators seems likely to become widely accepted in the future.
Towards Resolving the Crab Sigma-Problem: A Linear Accelerator?
NASA Technical Reports Server (NTRS)
Contopoulos, Ioannis; Kazanas, Demosthenes; White, Nicholas E. (Technical Monitor)
2002-01-01
Using the exact solution of the axisymmetric pulsar magnetosphere derived in a previous publication and the conservation laws of the associated MHD flow, we show that the Lorentz factor of the outflowing plasma increases linearly with distance from the light cylinder. Therefore, the ratio of the Poynting to particle energy flux, generically referred to as sigma, decreases inversely proportional to distance, from a large value (typically >~ 10^4) near the light cylinder to sigma ~ 1 at a transition distance R_trans. Beyond this distance the inertial effects of the outflowing plasma become important and the magnetic field geometry must deviate from the almost monopolar form it attains between R_lc and R_trans. We anticipate that this is achieved by collimation of the poloidal field lines toward the rotation axis, ensuring that the magnetic field pressure in the equatorial region will fall off faster than 1/R^2 (R being the cylindrical radius). This leads both to a value sigma = sigma_s << 1 at the nebular reverse shock at distance R_s (R_s >> R_trans) and to a component of the flow perpendicular to the equatorial component, as required by observation. The presence of the strong shock at R = R_s allows for the efficient conversion of kinetic energy into radiation. We speculate that the Crab pulsar is unique in requiring sigma_s ~ 3 x 10^-3 because of its small translational velocity, which allowed the shock distance R_s to grow to values much greater than R_trans.
An application of a linear programing technique to nonlinear minimax problems
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1973-01-01
A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programing algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
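The linear minimax subproblem that a differential-correction scheme like the one above solves at each linearization can be posed as a linear program: introduce a bound variable t and minimize it subject to every residual lying in [-t, t]. This is an illustrative sketch, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def linear_minimax(A, b):
    """Solve the linear minimax problem min_x max_i |(A x - b)_i| as an LP.

    Variables are (x, t); minimize t subject to -t <= (A x - b)_i <= t
    for every row i.
    """
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                            # objective: minimize t
    A_ub = np.vstack([np.hstack([A, -np.ones((m, 1))]),    #  (A x - b)_i <= t
                      np.hstack([-A, -np.ones((m, 1))])])  # -(A x - b)_i <= t
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[-1]

# fitting a constant to the data {0, 1}: the minimax fit is 0.5 with error 0.5
A = np.ones((2, 1))
b = np.array([0.0, 1.0])
x, t = linear_minimax(A, b)
```

Iterating this LP on a linearization of the nonlinear model about the current nominal solution gives the differential-correction scheme described above.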
Gene Golub; Kwok Ko
2009-03-30
The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms so that the ever-increasing problem sizes required by the physics application can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
Global symmetry relations in linear and viscoplastic mobility problems
NASA Astrophysics Data System (ADS)
Kamrin, Ken; Goddard, Joe
2014-11-01
The mobility tensor of a textured surface is a homogenized effective boundary condition that describes the effective slip of a fluid adjacent to the surface in terms of an applied shear traction far above the surface. In the Newtonian fluid case, perturbation analysis yields a mobility tensor formula, which suggests that regardless of the surface texture (i.e. nonuniform hydrophobicity distribution and/or height fluctuations) the mobility tensor is always symmetric. This conjecture is verified using a Lorentz reciprocity argument. It motivates the question of whether such symmetries would arise for nonlinear constitutive relations and boundary conditions, where the mobility tensor is not a constant but a function of the applied stress. We show that in the case of a strongly dissipative nonlinear constitutive relation--one whose strain-rate relates to the stress solely through a scalar Edelen potential--and strongly dissipative surface boundary conditions--one whose hydrophobic character is described by a potential relating slip to traction--the mobility function of the surface also maintains tensorial symmetry. By extension, the same variational arguments can be applied in problems such as the permeability tensor for viscoplastic flow through porous media, and we find that similar symmetries arise. These findings could be used to simplify the characterization of viscoplastic drag in various anisotropic media. (Joe Goddard is a former graduate student of Acrivos).
Solution algorithms for non-linear singularly perturbed optimal control problems
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1983-01-01
The applicability and usefulness of several classical and other methods for solving the two-point boundary-value problem which arises in non-linear singularly perturbed optimal control are assessed. Specific algorithms of the Picard, Newton and averaging types are formally developed for this class of problem. The computational requirements associated with each algorithm are analysed and compared with the computational requirement of the method of matched asymptotic expansions. Approximate solutions to a linear and a non-linear problem are obtained by each method and compared.
A New Bound for the Ratio Between the 2-Matching Problem and Its Linear Programming Relaxation
Boyd, Sylvia; Carr, Robert
1999-07-28
Consider the 2-matching problem defined on the complete graph, with edge costs which satisfy the triangle inequality. We prove that the value of a minimum cost 2-matching is bounded above by 4/3 times the value of its linear programming relaxation, the fractional 2-matching problem. This lends credibility to a long-standing conjecture that the optimal value for the traveling salesman problem is bounded above by 4/3 times the value of its linear programming relaxation, the subtour elimination problem.
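The fractional 2-matching relaxation mentioned above can be written directly as a linear program over the vertex-edge incidence matrix; a minimal sketch with illustrative names follows:

```python
import numpy as np
from scipy.optimize import linprog

def fractional_2matching(n, edges, costs):
    """LP relaxation of the minimum-cost 2-matching.

    min c.x  subject to  sum of x_e over edges incident to each vertex = 2,
                         0 <= x_e <= 1.
    Its optimal value lower-bounds the integral 2-matching; the result above
    shows the integral optimum is within a factor 4/3 of it under
    triangle-inequality costs.
    """
    m = len(edges)
    A_eq = np.zeros((n, m))
    for j, (u, v) in enumerate(edges):
        A_eq[u, j] = A_eq[v, j] = 1.0        # vertex-edge incidence
    res = linprog(costs, A_eq=A_eq, b_eq=2.0 * np.ones(n),
                  bounds=[(0.0, 1.0)] * m)
    return res.fun, res.x

# 4-cycle with unit costs: every edge must be used fully, value 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
val, x = fractional_2matching(4, edges, np.ones(4))
```

On graphs where the relaxation has fractional vertices (e.g. odd cycles), the LP value strictly undercuts the integral optimum, which is where the 4/3 bound has force.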
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear-programming relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
Rescuing complementarity with little drama
NASA Astrophysics Data System (ADS)
Bao, Ning; Bouland, Adam; Chatwin-Davies, Aidan; Pollack, Jason; Yuen, Henry
2016-12-01
The AMPS paradox challenges black hole complementarity by apparently constructing a way for an observer to bring information from the outside of the black hole into its interior if there is no drama at its horizon, making manifest a violation of monogamy of entanglement. We propose a new resolution to the paradox: this violation cannot be explicitly checked by an infalling observer in the finite proper time they have to live after crossing the horizon. Our resolution depends on a weak relaxation of the no-drama condition (we call it "little-drama") which is the "complementarity dual" of scrambling of information on the stretched horizon. When translated to the description of the black hole interior, this implies that the fine-grained quantum information of infalling matter is rapidly diffused across the entire interior while classical observables and coarse-grained geometry remain unaffected. Under the assumption that information has diffused throughout the interior, we consider the difficulty of the information-theoretic task that an observer must perform after crossing the event horizon of a Schwarzschild black hole in order to verify a violation of monogamy of entanglement. We find that the time required to complete a necessary subroutine of this task, namely the decoding of Bell pairs from the interior and the late radiation, takes longer than the maximum amount of time that an observer can spend inside the black hole before hitting the singularity. Therefore, an infalling observer cannot observe monogamy violation before encountering the singularity.
Evaluation of linear solvers for oil reservoir simulation problems. Part 2: The fully implicit case
Joubert, W.; Janardhan, R.
1997-12-01
A previous paper [Joubert/Biswas 1997] contained investigations of linear solver performance for matrices arising from Amoco's Falcon parallel oil reservoir simulation code using the IMPES formulation (implicit pressure, explicit saturation). In this companion paper, similar issues are explored for linear solvers applied to matrices arising from more difficult fully implicit problems. The results of numerical experiments are given.
NASA Astrophysics Data System (ADS)
Zhadan, V. G.
2016-07-01
The linear semidefinite programming problem is considered. The dual affine scaling method in which all current iterations belong to the feasible set is proposed for its solution. Moreover, the boundaries of the feasible set may be reached. This method is a generalization of a version of the affine scaling method that was earlier developed for linear programs to the case of semidefinite programming.
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
ERIC Educational Resources Information Center
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
Bramble, J.H.; Pasciak, J.E.
1981-01-01
The linearized scalar potential formulation of the magnetostatic field problem is considered. The approach involves a reformulation of the continuous problem as a parametric boundary problem. By the introduction of a spherical interface and the use of spherical harmonics, the infinite boundary condition can also be satisfied in the parametric framework. The reformulated problem is discretized by finite element techniques and a discrete parametric problem is solved by conjugate gradient iteration. This approach decouples the problem in that only standard Neumann type elliptic finite element systems on separate bounded domains need be solved. The boundary conditions at infinity and the interface conditions are satisfied during the boundary parametric iteration.
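The inner solves in this approach are conjugate gradient iterations on standard Neumann-type elliptic finite element systems. As a point of reference (this is a minimal textbook CG, not code from the paper, and the small test matrix is illustrative):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook CG for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system (1-D Laplacian-like stencil)
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
x = conjugate_gradient(A, b)
```

A dense solve would of course suffice for a 3-by-3 system; CG earns its keep on the large sparse systems that finite element discretizations actually produce, which is why the paper pairs it with the decoupled Neumann subproblems.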
Xu, Andrew Wei
2010-09-01
In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance ∑_{g∈G} d(q, g). This is a key problem in genome-rearrangement-based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes--represented by adequate subgraphs--allow us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty in the circular case; this difficulty was underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into the capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it also can provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1986-01-01
An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1988-01-01
An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
Newton's method for large bound-constrained optimization problems.
Lin, C.-J.; Moré, J. J.; Mathematics and Computer Science
1999-01-01
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
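The paper's trust-region Newton method is its own algorithm; as a hedged illustration of the same problem class only, SciPy ships an off-the-shelf trust-region solver for bound-constrained minimization (`trust-constr`). The box-constrained Rosenbrock problem below is a made-up example, not from the paper; the bounds are chosen so that one of them is active at the solution, which is exactly the situation where the geometry of the feasible set matters:

```python
import numpy as np
from scipy.optimize import minimize, Bounds

def rosen(x):
    """Classic Rosenbrock function; unconstrained minimum at (1, 1)."""
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

# Box [0, 0.8]^2 cuts off the unconstrained minimum, so the bound
# x[0] <= 0.8 is active at the constrained solution (0.8, 0.64).
bounds = Bounds([0.0, 0.0], [0.8, 0.8])
res = minimize(rosen, x0=np.array([0.5, 0.5]),
               method='trust-constr', bounds=bounds)
```

Note that with the bound active, the constrained minimizer sits at x[0] = 0.8 and x[1] = 0.8² = 0.64, since the quadratic penalty term then vanishes.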
Illusion of Linearity in Geometry: Effect in Multiple-Choice Problems
ERIC Educational Resources Information Center
Vlahovic-Stetic, Vesna; Pavlin-Bernardic, Nina; Rajter, Miroslav
2010-01-01
The aim of this study was to examine if there is a difference in the performance on non-linear problems regarding age, gender, and solving situation, and whether the multiple-choice answer format influences students' thinking. A total of 112 students, aged 15-16 and 18-19, were asked to solve problems for which solutions based on proportionality…
The synthesis of optimal controls for linear, time-optimal problems with retarded controls.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Jacobs, M. Q.; Latina, M. R.
1971-01-01
Optimization problems involving linear systems with retardations in the controls are studied in a systematic way. Some physical motivation for the problems is discussed. The topics covered are: controllability, existence and uniqueness of the optimal control, sufficient conditions, techniques of synthesis, and dynamic programming. A number of solved examples are presented.
ERIC Educational Resources Information Center
Acevedo Nistal, Ana; Van Dooren, Wim; Verschaffel, Lieven
2013-01-01
Thirty-six secondary school students aged 14-16 were interviewed while they chose between a table, a graph or a formula to solve three linear function problems. The justifications for their choices were classified as (1) task-related if they explicitly mentioned the to-be-solved problem, (2) subject-related if students mentioned their own…
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
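The LSOCE program itself is a historical FORTRAN code for the IBM 7090-7094, but the core of the quadratic-cost control design it performs, the discrete-time Riccati recursion yielding the steady-state optimal feedback gain, can be sketched in a few lines. The double-integrator numbers below are illustrative, not from the report:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, n_iter=500):
    """Discrete-time LQR: iterate the Riccati recursion to a fixed point
    and return the steady-state feedback gain K (control law u = -K x)."""
    P = Q.copy()
    for _ in range(n_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Hypothetical discretized double integrator
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting
K, P = dlqr_gain(A, B, Q, R)
# The closed-loop matrix A - B K should have spectral radius below 1.
```

In the stochastic setting of the report this state-feedback gain is paired with a Kalman filter for estimation; the separation principle lets the two be designed independently.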
Yu, Guoshen; Sapiro, Guillermo; Mallat, Stéphane
2012-05-01
A general framework for solving image inverse problems with piecewise linear estimations is introduced in this paper. The approach is based on Gaussian mixture models, which are estimated via a maximum a posteriori expectation-maximization algorithm. A dual mathematical interpretation of the proposed framework with a structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared with traditional sparse inverse problem techniques. We demonstrate that, in a number of image inverse problems, including interpolation, zooming, and deblurring of narrow kernels, the same simple and computationally efficient algorithm yields results in the same ballpark as that of the state of the art.
On high-continuity transfinite element formulations for linear-nonlinear transient thermal problems
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1987-01-01
This paper describes recent developments in the applicability of a hybrid transfinite element methodology with emphasis on high-continuity formulations for linear/nonlinear transient thermal problems. The proposed concepts furnish accurate temperature distributions and temperature gradients making use of a relatively smaller number of degrees of freedom; and the methodology is applicable to linear/nonlinear thermal problems. Characteristic features of the formulations are described in technical detail as the proposed hybrid approach combines the major advantages and modeling features of high-continuity thermal finite elements in conjunction with transform methods and classical Galerkin schemes. Several numerical test problems are evaluated and the results obtained validate the proposed concepts for linear/nonlinear thermal problems.
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
Some comparison of restarted GMRES and QMR for linear and nonlinear problems
Morgan, R.; Joubert, W.
1994-12-31
Comparisons are made between the following methods: QMR, including its transpose-free version; restarted GMRES; and a modified restarted GMRES that uses approximate eigenvectors to improve convergence. For some problems, the modified GMRES is competitive with or better than QMR in terms of the number of matrix-vector products. Also, the GMRES methods can be much better when several similar systems of linear equations must be solved, as in the case of nonlinear problems and ODE problems.
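For readers who want to reproduce this kind of comparison, restarted GMRES is available off the shelf in SciPy. A minimal sketch follows; the nonsymmetric tridiagonal test matrix is chosen arbitrarily (convection-diffusion flavour), not taken from the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Nonsymmetric, diagonally dominant tridiagonal test matrix
n = 100
A = diags([-1.2, 2.5, -0.8], offsets=[-1, 0, 1],
          shape=(n, n), format='csr')
b = np.ones(n)

# restart=20 means GMRES(20): the Krylov basis is discarded and
# rebuilt every 20 inner iterations, trading convergence speed
# for bounded memory -- the trade-off the paper studies.
x, info = gmres(A, b, restart=20, maxiter=1000)
# info == 0 signals convergence to the default tolerance
```

Counting matrix-vector products, as the paper does, is the fair metric here, since one GMRES(20) cycle and several QMR steps cost very different amounts of memory per matrix-vector product.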
Upper error bounds on calculated outputs of interest for linear and nonlinear structural problems
NASA Astrophysics Data System (ADS)
Ladevèze, Pierre
2006-07-01
This Note introduces new strict upper error bounds on outputs of interest for linear as well as time-dependent nonlinear structural problems calculated by the finite element method. Small-displacement problems without softening, such as (visco)plasticity problems, are included through the standard thermodynamics framework involving internal state variables. To cite this article: P. Ladevèze, C. R. Mecanique 334 (2006).
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique.
Initial-value problem for a linear ordinary differential equation of noninteger order
Pskhu, Arsen V
2011-04-30
An initial-value problem for a linear ordinary differential equation of noninteger order with Riemann-Liouville derivatives is stated and solved. The initial conditions of the problem ensure that (by contrast with the Cauchy problem) it is uniquely solvable for an arbitrary set of parameters specifying the orders of the derivatives involved in the equation; these conditions are necessary for the equation under consideration. The problem is reduced to an integral equation; an explicit representation of the solution in terms of the Wright function is constructed. As a consequence of these results, necessary and sufficient conditions for the solvability of the Cauchy problem are obtained. Bibliography: 7 titles.
Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control
Gaitsgory, Vladimir; Rossomakhine, Sergey
2015-04-15
The paper aims at the development of an apparatus for analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time-discounting criteria, but the possibility of extending the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of an available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.
NASA Astrophysics Data System (ADS)
Abramov, A. A.; Yukhno, L. F.
2017-08-01
Numerical methods are proposed for solving some problems for a system of linear ordinary differential equations in which the basic conditions (which are generally nonlocal ones specified by a Stieltjes integral) are supplemented with redundant (possibly nonlocal) conditions. The system of equations is considered on a finite or infinite interval. The problem of solving the inhomogeneous system of equations and a nonlinear eigenvalue problem are considered. Additionally, the special case of a self-adjoint eigenvalue problem for a Hamiltonian system is addressed. In the general case, these problems have no solutions. A principle for constructing an auxiliary system that replaces the original one and is normally consistent with all specified conditions is proposed. For each problem, a numerical method for solving the corresponding auxiliary problem is described. The method is numerically stable if the constructed auxiliary problem is.
Role of complementarity in superdense coding
NASA Astrophysics Data System (ADS)
Coles, Patrick J.
2013-12-01
The complementarity of two observables is often captured in uncertainty relations, which quantify an inevitable trade-off in knowledge. Here we study complementarity in the context of an information-processing task: we link the complementarity of two observables to their usefulness for superdense coding (SDC). In SDC, Alice sends two classical dits of information to Bob by sending a single qudit. However, we show that encoding with commuting unitaries prevents Alice from sending more than one dit per qudit, implying that complementarity is necessary for SDC to be advantageous over a classical strategy for information transmission. When Alice encodes with products of Pauli operators for the X and Z bases, we quantify the complementarity of these encodings in terms of the overlap of the X and Z basis elements. Our main result explicitly solves for the SDC capacity as a function of the complementarity, showing that the entropy of the overlap matrix gives the capacity, when the preshared state is maximally entangled. We generalize this equation to resources with symmetric noise such as a preshared Werner state. In the most general case of arbitrary noisy resources, we obtain an analogous lower bound on the SDC capacity. Our results shed light on the role of complementarity in determining the quantum advantage in SDC and also seem fundamentally interesting since they bear a striking resemblance to uncertainty relations.
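The noiseless core of the protocol, Pauli encodings mapping a shared Bell pair onto the four orthogonal Bell states, can be verified directly in a few lines of linear algebra. This is a sketch of standard superdense coding, not the paper's capacity calculation:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Preshared maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Alice's Pauli encodings for the four two-bit messages
encodings = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}

# The four Bell states, which Bob's joint measurement distinguishes
bell_basis = {
    (0, 0): np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2),   # |Phi+>
    (0, 1): np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2),   # |Psi+>
    (1, 0): np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2),  # |Phi->
    (1, 1): np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2),  # |Psi->
}

def decode(state):
    """Return the message whose Bell state has maximal overlap with state."""
    return max(bell_basis, key=lambda m: abs(np.vdot(bell_basis[m], state)))

for message, U in encodings.items():
    sent = np.kron(U, I) @ phi_plus   # Alice acts on her qubit only
    assert decode(sent) == message    # Bob recovers both classical bits
```

The point of the abstract is precisely that this works because X and Z are maximally complementary; replacing the four encodings with commuting unitaries would collapse the four output states into at most two distinguishable ones, capping the rate at one dit per qudit.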
A strictly improving linear programming algorithm based on a series of Phase 1 problems
Leichner, S.A.; Dantzig, G.B.; Davis, J.W.
1992-04-01
When used on degenerate problems, the simplex method often takes a number of degenerate steps at a particular vertex before moving to the next. In theory (although rarely in practice), the simplex method can actually cycle at such a degenerate point. Instead of trying to modify the simplex method to avoid degenerate steps, we have developed a new linear programming algorithm that is completely impervious to degeneracy. This new method solves the Phase II problem of finding an optimal solution by solving a series of Phase I feasibility problems. Strict improvement is attained at each iteration in the Phase I algorithm, and the Phase II sequence of feasibility problems has linear convergence in the number of Phase I problems. When tested on the 30 smallest NETLIB linear programming test problems, the computational results for the new Phase II algorithm were over 15% faster than the simplex method; on some problems, it was almost two times faster, and on one problem it was four times faster.
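The paper's algorithm is more elaborate, but its building block, a Phase I feasibility problem that minimizes total artificial infeasibility, can be sketched with any generic LP solver. The helper function and test data below are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def phase1_feasible(A, b):
    """Check feasibility of {x >= 0 : A x <= b} by minimizing the total
    artificial infeasibility: min sum(s) s.t. A x - s <= b, x, s >= 0.
    The system is feasible iff the Phase I optimum is zero."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # objective: sum(s)
    A_ub = np.hstack([A, -np.eye(m)])               # A x - s <= b
    res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(0, None)] * (n + m))
    return res.status == 0 and res.fun < 1e-9

# Feasible system: x = (0, 0) already satisfies A x <= b
A1 = np.array([[1.0, 1.0], [-1.0, 2.0]])
assert phase1_feasible(A1, np.array([4.0, 2.0]))

# Infeasible system: x <= -1 and -x <= -1 cannot hold with x >= 0
A2 = np.array([[1.0], [-1.0]])
assert not phase1_feasible(A2, np.array([-1.0, -1.0]))
```

The Phase I LP is always feasible (take s large enough), which is what makes a Phase II built from a sequence of such problems immune to the degenerate stalling the abstract describes.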
Cichocki, A; Unbehauen, R
1994-01-01
In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection (Kaczmarz) algorithm and/or the LMS (Adaline) Widrow-Hoff algorithms. The algorithms can be applied to any problem which can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
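As a point of reference for the row-action family the paper builds on, the classical Kaczmarz iteration (the digital prototype, not the authors' analog-network version) is only a few lines: each step projects the current iterate onto the hyperplane defined by one row of the system.

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Row-action (Kaczmarz) iteration for a consistent system A x = b:
    cyclically project the iterate onto each row's solution hyperplane."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a_i = A[i]
            x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Small consistent test system with exact solution (2, 3)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = kaczmarz(A, b)
```

Each projection uses a single row, which is what makes the scheme attractive for cheap parallel analog hardware: no matrix factorization, just repeated inner products.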
Reintroducing the Concept of Complementarity into Psychology.
Wang, Zheng; Busemeyer, Jerome
2015-01-01
Central to quantum theory is the concept of complementarity. In this essay, we argue that complementarity is also central to the emerging field of quantum cognition. We review the concept, its historical roots in psychology, and its development in quantum physics and offer examples of how it can be used to understand human cognition. The concept of complementarity provides a valuable and fresh perspective for organizing human cognitive phenomena and for understanding the nature of measurements in psychology. In turn, psychology can provide valuable new evidence and theoretical ideas to enrich this important scientific concept.
Observation of complementarity in the macroscopic domain
Cao Dezhong; Xiong Jun; Tang Hua; Lin Lufang; Zhang Suheng; Wang Kaige
2007-09-15
Complementarity is usually considered as a phenomenon of microscopic systems. In this paper, we report an experimental observation of complementarity in correlated double-slit interference with a pseudothermal light source. The thermal light beam is divided into test and reference beams which are correlated with each other. The double slit is set in the test arm, and an interference pattern can be observed in the intensity correlation between the two arms. The experimental results show that the disappearance of the interference fringe depends on whether which-path information is gained through the reference arm. The experiment therefore shows complementarity occurring in the macroscopic domain.
Multi-point transmission problems for Sturm-Liouville equation with an abstract linear operator
NASA Astrophysics Data System (ADS)
Muhtarov, Fahreddin; Kandemir, Mustafa; Mukhtarov, O. Sh.
2017-04-01
In this paper, we consider the spectral problem for the equation -u″(x) + (A + λI)u(x) = f(x) on the two disjoint intervals (-1, 0) and (0, 1) together with multi-point boundary conditions and supplementary transmission conditions at the point of interaction x = 0, where A is an abstract linear operator. So, our problem is not a pure differential boundary-value one. Starting with the analysis of the principal part of the problem, the coercive estimates, the Fredholmness and isomorphism are established for the main problem. The obtained results are new even in the case of boundary conditions without internal points.
Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem
Yoo, Jaechil
1996-12-31
Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method; it did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter v goes to 1/2. Computational experiments are included.
On Development of a Problem Based Learning System for Linear Algebra with Simple Input Method
NASA Astrophysics Data System (ADS)
Yokota, Hisashi
2011-08-01
Learning how to express a matrix using keyboard input takes a lot of time for most college students. Therefore, for a problem based learning system for linear algebra to be accessible to college students, it is essential to develop a simple method for expressing matrices. By studying the two most widely used input methods for expressing matrices, a simpler input method is obtained. Furthermore, using this input method and the educator's knowledge structure as a concept map, a problem based learning system for linear algebra is developed which is capable of assessing students' knowledge structure and skill.
Bohrian Complementarity in the Light of Kantian Teleology
NASA Astrophysics Data System (ADS)
Pringe, Hernán
2014-03-01
The Kantian influences on Bohr's thought and the relationship between the perspective of complementarity in physics and in biology seem at first sight completely unrelated issues. However, the goal of this work is to show their intimate connection. We shall see that Bohr's views on biology shed light on Kantian elements of his thought, which enables a better understanding of his complementary interpretation of quantum theory. For this purpose, we shall begin by discussing Bohr's views on the analogies concerning the epistemological situation in biology and in physics. Later, we shall compare the Bohrian and the Kantian approaches to the science of life in order to show their close connection. On this basis, we shall finally turn to the issue of complementarity in quantum theory in order to assess what we can learn about the epistemological problems in the quantum realm from a consideration of Kant's views on teleology.
The Kantian framework of complementarity
NASA Astrophysics Data System (ADS)
Cuffaro, Michael
A growing number of commentators have, in recent years, noted the important affinities in the views of Immanuel Kant and Niels Bohr. While these commentators are correct, the picture they present of the connections between Bohr and Kant is painted in broad strokes; it is open to the criticism that these affinities are merely superficial. In this essay, I provide a closer, structural, analysis of both Bohr's and Kant's views that makes these connections more explicit. In particular, I demonstrate the similarities between Bohr's argument, on the one hand, that neither the wave nor the particle description of atomic phenomena pick out an object in the ordinary sense of the word, and Kant's requirement, on the other hand, that both 'mathematical' (having to do with magnitude) and 'dynamical' (having to do with an object's interaction with other objects) principles must be applicable to appearances in order for us to determine them as objects of experience. I argue that Bohr's 'complementarity interpretation' of quantum mechanics, which views atomic objects as idealizations, and which licenses the repeal of the principle of causality for the domain of atomic physics, is perfectly compatible with, and indeed follows naturally from a broadly Kantian epistemological framework.
A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, Fred T.
1992-01-01
A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.
A systematic linear space approach to solving partially described inverse eigenvalue problems
NASA Astrophysics Data System (ADS)
Hu, Sau-Lon James; Li, Haujun
2008-06-01
Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved under the situations that have either unique or infinitely many solutions.
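The core conversion the paper describes is that a spectral condition T v = λ v is linear in the structural parameters of T, so a PDIEP becomes a set of simultaneous linear equations solved by SVD. A minimal sketch for the symmetric Toeplitz case follows; the example eigenpair is made up, and since one eigenpair may underdetermine the matrix, the least-squares (SVD-based) solve returns the minimum-norm solution:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_from_eigenpair(lam, v):
    """Recover the first column t of a symmetric Toeplitz matrix T with
    T v = lam * v.  Since (T v)_i = sum_j t_{|i-j|} v_j, the condition is
    linear in t: M t = lam * v, with M[i, k] = sum over j with |i-j| = k
    of v[j].  np.linalg.lstsq solves this via the SVD."""
    n = len(v)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            M[i, abs(i - j)] += v[j]
    t, *_ = np.linalg.lstsq(M, lam * v, rcond=None)
    return t

# Hypothetical prescribed eigenpair
v = np.array([1.0, 2.0, 1.0])
vn = v / np.linalg.norm(v)
lam = 3.0
t = toeplitz_from_eigenpair(lam, vn)
T = toeplitz(t)   # reconstructed symmetric Toeplitz matrix
```

Because the structural constraint is baked into the parameterization, any extra linear constraints on the model parameters can be appended as additional rows of M, which is the flexibility the abstract highlights.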
NASA Technical Reports Server (NTRS)
Bauld, N. R., Jr.; Goree, J. G.
1983-01-01
The accuracy of the finite difference method in the solution of linear elasticity problems that involve either a stress discontinuity or a stress singularity is considered. Solutions to three elasticity problems are discussed in detail: a semi-infinite plane subjected to a uniform load over a portion of its boundary; a bimetallic plate under uniform tensile stress; and a long, midplane symmetric, fiber reinforced laminate subjected to uniform axial strain. Finite difference solutions to the three problems are compared with finite element solutions to corresponding problems. For the first problem a comparison with the exact solution is also made. The finite difference formulations for the three problems are based on second order finite difference formulas that provide for variable spacings in two perpendicular directions. Forward and backward difference formulas are used near boundaries where their use eliminates the need for fictitious grid points.
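The report's variable-spacing second order formulas are not reproduced in the abstract; as a generic illustration (the standard three-point formula on an uneven stencil, not necessarily the exact formulas used in the report), the second derivative at x from values at x - h1, x, and x + h2 can be approximated as:

```python
def second_derivative_nonuniform(f_left, f_mid, f_right, h1, h2):
    """Approximate f''(x) from values at x - h1, x, x + h2 (h1, h2 > 0).
    This three-point formula is exact for quadratics; for h1 != h2 its
    truncation error is first order in the spacing."""
    return (2.0 * f_left / (h1 * (h1 + h2))
            - 2.0 * f_mid / (h1 * h2)
            + 2.0 * f_right / (h2 * (h1 + h2)))
```

For f(x) = x**2 the formula returns exactly 2 for any pair of spacings, confirming its second-degree exactness.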
Analysis of junior high school students' attempt to solve a linear inequality problem
NASA Astrophysics Data System (ADS)
Taqiyuddin, Muhammad; Sumiaty, Encum; Jupri, Al
2017-08-01
Linear inequality is one of the fundamental subjects in the junior high school mathematics curriculum. Several studies have assessed students' performance on linear inequalities. However, linear inequality problems of the form "ax + b < dx + e" with "a, d ≠ 0" and "a ≠ d" can hardly be found in the textbook used by Indonesian students or in previous studies. This condition leads to research questions concerning students' attempts to solve a simple linear inequality problem of this form. A written test was administered to 58 students from two schools in Bandung, followed by interviews; further data come from teacher interviews and the mathematics books used by the students. The constant comparative method was then used to analyse the data. The results show that the majority approached the question through algebraic operations; interestingly, most of them did so incorrectly, while only some applied the algebraic operations correctly. The remaining responses consisted of substituting expected numbers, rewriting the question, translating the inequality into words, and blank answers. Furthermore, no student was aware of the existence of an all-numbers solution. This appears to be due to how little the learning materials address why a procedure for solving a linear inequality works and what forms its solution set can take.
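The solution-set cases the study highlights, including the all-numbers and no-solution cases that students missed, can be enumerated mechanically. A minimal sketch (our own illustration, not part of the study) for "ax + b < dx + e":

```python
def solve_linear_inequality(a, b, d, e):
    """Solve a*x + b < d*x + e over the reals.
    Returns a tuple describing the solution set:
      ('x <', c)  /  ('x >', c)  /  ('all',)  /  ('none',)"""
    # Collect terms: (a - d) * x < e - b
    coeff, rhs = a - d, e - b
    if coeff > 0:
        return ('x <', rhs / coeff)
    if coeff < 0:            # dividing by a negative flips the inequality
        return ('x >', rhs / coeff)
    # coeff == 0: either every real x works or none does
    return ('all',) if rhs > 0 else ('none',)
```

For instance, 2x + 3 < 5x - 6 reduces to 9 < 3x, i.e. x > 3, while x + 2 < x + 1 has no solution at all.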
Interpersonal complementarity in the mental health intake: a mixed-methods study.
Rosen, Daniel C; Miller, Alisa B; Nakash, Ora; Halperin, Lucila; Alegría, Margarita
2012-04-01
The study examined which socio-demographic differences between clients and providers influenced interpersonal complementarity during an initial intake session; that is, behaviors that facilitate harmonious interactions between client and provider. Complementarity was assessed using blinded ratings of 114 videotaped intake sessions by trained observers. Hierarchical linear models were used to examine how match between client and provider in race/ethnicity, sex, and age were associated with levels of complementarity. A qualitative analysis investigated potential mechanisms that accounted for overall complementarity beyond match by examining client-provider dyads in the top and bottom quartiles of the complementarity measure. Results indicated significant interactions between client's race/ethnicity (Black) and provider's race/ethnicity (Latino) (p = .036) and client's age and provider's age (p = .044) on the Affiliation axis. The qualitative investigation revealed that client-provider interactions in the upper quartile of complementarity were characterized by consistent descriptions between the client and provider of concerns and expectations as well as depictions of what was important during the meeting. Results suggest that differences in social identities, although important, may be overcome by interpersonal variables early in the therapeutic relationship. Implications for both clinical practice and future research are discussed, as are factors relevant to working across cultures.
High Order Finite Difference Methods, Multidimensional Linear Problems and Curvilinear Coordinates
NASA Technical Reports Server (NTRS)
Nordstrom, Jan; Carpenter, Mark H.
1999-01-01
Boundary and interface conditions are derived for high order finite difference methods applied to multidimensional linear problems in curvilinear coordinates. The boundary and interface conditions lead to conservative schemes and strict and strong stability provided that certain metric conditions are met.
Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.
ERIC Educational Resources Information Center
Shama, Gilli; Dreyfus, Tommy
1994-01-01
Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)
Linear Integro-differential Schroedinger and Plate Problems Without Initial Conditions
Lorenzi, Alfredo
2013-06-15
Via Carleman's estimates we prove uniqueness and continuous dependence results for the temporal traces of solutions to overdetermined linear ill-posed problems related to Schroedinger and plate equation. The overdetermination is prescribed in an open subset of the (space-time) lateral boundary.
The Tricomi problem of a quasi-linear Lavrentiev-Bitsadze mixed type equation
NASA Astrophysics Data System (ADS)
Shuxing, Chen; Zhenguo, Feng
2013-06-01
In this paper, we consider the Tricomi problem of a quasi-linear Lavrentiev-Bitsadze mixed type equation (sgn u_y) ∂²u/∂x² + ∂²u/∂y² - 1 = 0, whose coefficients depend on the first-order derivatives of the unknown function. We prove the existence of a solution to this problem by using the hodograph transformation. The method can be applied to study more difficult problems for nonlinear mixed type equations arising in gas dynamics.
A general algorithm for control problems with variable parameters and quasi-linear models
NASA Astrophysics Data System (ADS)
Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.
2015-12-01
This paper presents an algorithm that is able to solve optimal control problems in which the modelling of the system contains variable parameters, with the added complication that, in certain cases, these parameters can lead to control problems governed by quasi-linear equations. Combining the techniques of Pontryagin's Maximum Principle and the shooting method, an algorithm has been developed that is not affected by the values of the parameters, being able to solve conventional problems as well as cases in which the optimal solution is shown to be bang-bang with singular arcs.
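The Pontryagin/shooting combination ultimately reduces to a root-find on unknown initial conditions. A stripped-down sketch (a plain two-point boundary value problem rather than the paper's control problem; the equation and names are our own choices) using SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(s):
    """Integrate y'' = 6x from x = 0 with y(0) = 0, y'(0) = s;
    return the terminal value y(1)."""
    sol = solve_ivp(lambda x, y: [y[1], 6.0 * x], (0.0, 1.0), [0.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Find the initial slope s* that hits the boundary condition y(1) = 1.
# The exact solution is y = x**3, so the true slope is s* = 0.
s_star = brentq(lambda s: shoot(s) - 1.0, -10.0, 10.0)
```

The same pattern carries over to optimal control: the unknowns become the initial costates, and the root-find enforces the transversality conditions.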
Geometric tools for solving the FDI problem for linear periodic discrete-time systems
NASA Astrophysics Data System (ADS)
Longhi, Sauro; Monteriù, Andrea
2013-07-01
This paper studies the problem of detecting and isolating faults in linear periodic discrete-time systems. The aim is to design an observer-based residual generator where each residual is sensitive to one fault, whilst remaining insensitive to the other faults that can affect the system. Making use of geometric tools, in particular the notion of the outer observable subspace, the Fault Detection and Isolation (FDI) problem is formulated and conditions for its solvability are given. An algorithmic procedure is described to determine the solution of the FDI problem.
NASA Technical Reports Server (NTRS)
Kent, James; Holdaway, Daniel
2015-01-01
A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
The linearized characteristics method and its application to practical nonlinear supersonic problems
NASA Technical Reports Server (NTRS)
Ferri, Antonio
1952-01-01
The method of characteristics has been linearized by assuming that the flow field can be represented as a basic flow field determined by nonlinearized methods and a linearized superposed flow field that accounts for small changes of boundary conditions. The method has been applied to two-dimensional rotational flow where the basic flow is potential flow and to axially symmetric problems where conical flows have been used as the basic flows. In both cases the method allows the determination of the flow field to be simplified and the numerical work to be reduced to a few calculations. The calculations of axially symmetric flow can be simplified if tabulated values of some coefficients of the conical flow are obtained. The method has also been applied to slender bodies without symmetry and to some three-dimensional wing problems where two-dimensional flow can be used as the basic flow. Both problems were previously unsolved in the nonlinear-flow approximation.
Voila: A visual object-oriented iterative linear algebra problem solving environment
Edwards, H.C.; Hayes, L.J.
1994-12-31
Application of iterative methods to solve a large linear system of equations currently involves writing a program which calls iterative method subprograms from a large software package. These subprograms have complex interfaces which are difficult to use and even more difficult to program. A problem solving environment specifically tailored to the development and application of iterative methods is needed. This need will be fulfilled by Voila, a problem solving environment which provides a visual programming interface to object-oriented iterative linear algebra kernels. Voila will provide several quantum improvements over current iterative method problem solving environments. First, programming and applying iterative methods is considerably simplified through Voila's visual programming interface. Second, iterative method algorithm implementations are independent of any particular sparse matrix data structure through Voila's object-oriented kernels. Third, the compile-link-debug process is eliminated as Voila operates as an interpreter.
Solving and analyzing side-chain positioning problems using linear and integer programming.
Kingsford, Carleton L; Chazelle, Bernard; Singh, Mona
2005-04-01
Side-chain positioning is a central component of homology modeling and protein design. In a common formulation of the problem, the backbone is fixed, side-chain conformations come from a rotamer library, and a pairwise energy function is optimized. It is NP-complete to find even a reasonable approximate solution to this problem. We seek to put this hardness result into practical context. We present an integer linear programming (ILP) formulation of side-chain positioning that allows us to tackle large problem sizes. We relax the integrality constraint to give a polynomial-time linear programming (LP) heuristic. We apply LP to position side chains on native and homologous backbones and to choose side chains for protein design. Surprisingly, when positioning side chains on native and homologous backbones, optimal solutions using a simple, biologically relevant energy function can usually be found using LP. On the other hand, the design problem often cannot be solved using LP directly; however, optimal solutions for large instances can still be found using the computationally more expensive ILP procedure. While different energy functions also affect the difficulty of the problem, the LP/ILP approach is able to find optimal solutions. Our analysis is the first large-scale demonstration that LP-based approaches are highly effective in finding optimal (and successive near-optimal) solutions for the side-chain positioning problem.
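A toy instance of the LP relaxation can be written directly for scipy.optimize.linprog. The example below (two residues with two rotamers each; the energies are made-up numbers and the variable layout is our own, not the paper's) has a tree-structured interaction graph, so the relaxation comes out integral, mirroring the paper's observation that LP alone often suffices:

```python
import numpy as np
from scipy.optimize import linprog

# Self energies E[i][r] and pairwise energies P[r][s] (hypothetical numbers)
E = np.array([[1.0, 3.0],
              [2.0, 0.5]])
P = np.array([[4.0, 0.2],
              [0.1, 5.0]])

# Variable order: x10, x11, x20, x21, y00, y01, y10, y11
# x_{i,r} = residue i uses rotamer r; y_{rs} = joint choice (r, s).
c = np.concatenate([E.ravel(), P.ravel()])

A_eq = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],    # residue 1 picks exactly one rotamer
    [0, 0, 1, 1, 0, 0, 0, 0],    # residue 2 picks exactly one rotamer
    [-1, 0, 0, 0, 1, 1, 0, 0],   # y00 + y01 = x10 (consistency)
    [0, -1, 0, 0, 0, 0, 1, 1],   # y10 + y11 = x11
    [0, 0, -1, 0, 1, 0, 1, 0],   # y00 + y10 = x20
    [0, 0, 0, -1, 0, 1, 0, 1],   # y01 + y11 = x21
])
b_eq = np.array([1, 1, 0, 0, 0, 0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
```

Enumerating the four rotamer pairs gives energies 7.0, 1.7, 5.1 and 8.5, so the LP optimum of 1.7 is the true minimum, attained at the integral point x = (1, 0, 0, 1).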
Stable computation of search directions for near-degenerate linear programming problems
Hough, P.D.
1997-03-01
In this paper, we examine stability issues that arise when computing search directions (Δx, Δy, Δs) for a primal-dual path-following interior point method for linear programming. The dual step Δy can be obtained by solving a weighted least-squares problem for which the weight matrix becomes extremely ill-conditioned near the boundary of the feasible region. Hough and Vavasis proposed using a type of complete orthogonal decomposition (the COD algorithm) to solve such a problem and presented stability results. The work presented here addresses the stable computation of the primal step Δx and the change in the dual slacks Δs. These directions can be obtained in a straightforward manner, but near-degeneracy in the linear programming instance introduces ill-conditioning which can cause numerical problems in this approach. Therefore, we propose a new method of computing Δx and Δs. More specifically, this paper describes an orthogonal projection algorithm that extends the COD method. Unlike other algorithms, this method is stable for interior point methods without assuming nondegeneracy in the linear programming instance. Thus, it is more general than other algorithms on near-degenerate problems.
An algorithm for the weighting matrices in the sampled-data optimal linear regulator problem
NASA Technical Reports Server (NTRS)
Armstrong, E. S.; Caglayan, A. K.
1976-01-01
The sampled-data optimal linear regulator problem provides a means whereby a control designer can use an understanding of continuous optimal regulator design to produce a digital state variable feedback control law which satisfies continuous system performance specifications. A basic difficulty in applying the sampled-data regulator theory is the requirement that certain digital performance index weighting matrices, expressed as complicated functions of system matrices, be computed. Infinite series representations are presented for the weighting matrices of the time-invariant version of the optimal linear sampled-data regulator problem. Error bounds are given for estimating the effect of truncating the series expressions after a finite number of terms, and a method is described for their computer implementation. A numerical example is given to illustrate the results.
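The state-weighting matrix behind those series expressions can be checked by brute-force quadrature. The sketch below (composite trapezoidal rule with SciPy's matrix exponential; a stand-in of our own, not the report's truncated-series formulas) approximates Qd = ∫₀ᵀ exp(Aᵀt) Q exp(At) dt:

```python
import numpy as np
from scipy.linalg import expm

def discrete_state_weighting(A, Q, T, steps=200):
    """Approximate Qd = integral_0^T expm(A.T t) @ Q @ expm(A t) dt by
    the composite trapezoidal rule; a brute-force alternative to the
    truncated infinite-series formulas discussed in the report."""
    h = T / steps
    total = np.zeros_like(Q, dtype=float)
    for k in range(steps + 1):
        t = k * h
        term = expm(A.T * t) @ Q @ expm(A * t)
        weight = 0.5 if k in (0, steps) else 1.0   # trapezoid end weights
        total += weight * term
    return h * total
```

In the scalar case A = [[-1]], Q = [[1]], T = 1 the integral is (1 - e⁻²)/2, which the quadrature reproduces to a few digits and which a series implementation could be validated against.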
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained in the past for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, which is known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the neuro-genetic hybrid technique produced better results than the stand-alone genetic programming method.
Evaluation of boundary element methods for the EEG forward problem: Effect of linear interpolation
Schlitt, H.A.; Heller, L.; Best, E.; Ranken, D.M.; Aaron, R.
1995-01-01
We implement the approach for solving the boundary integral equation for the electroencephalography (EEG) forward problem proposed by de Munck, in which the electric potential varies linearly across each plane triangle of the mesh. Previous solutions have assumed the potential is constant across an element. We calculate the electric potential and systematically investigate the effect of different mesh choices and dipole locations by using a three concentric sphere head model for which there is an analytic solution. Implementing the linear interpolation approximation results in errors that are approximately half those of the same mesh when the potential is assumed to be constant, and provides a reliable method for solving the problem.
Well-posedness of the time-varying linear electromagnetic initial-boundary value problem
NASA Astrophysics Data System (ADS)
Xie, Li; Lei, Yin-Zhao
2007-09-01
The well-posedness of the initial-boundary value problem of the time-varying linear electromagnetic field in a multi-medium region is investigated. Function spaces are defined, with Faraday's law of electromagnetic induction and the initial-boundary conditions considered as constraints. Gauss's formula applied to a multi-medium region is used to derive the energy-estimating inequality. After converting the initial-boundary conditions into homogeneous ones and analysing the characteristics of an operator introduced according to the total current law, the existence, uniqueness and stability of the weak solution to the initial-boundary value problem of the time-varying linear electromagnetic field are proved.
Observations on the linear programming formulation of the single reflector design problem.
Canavesi, Cristina; Cassarly, William J; Rolland, Jannick P
2012-02-13
We implemented the linear programming approach proposed by Oliker and by Wang to solve the single reflector problem for a point source and a far-field target. The algorithm was shown to produce solutions that aim the input rays at the intersections between neighboring reflectors. This feature makes it possible to obtain the same reflector with a low number of rays (of the order of the number of targets) as with a high number of rays, greatly reducing the computational complexity of the problem.
NASA Astrophysics Data System (ADS)
Perrone, Antonio L.; Basti, Gianfranco
1995-04-01
With respect to Rosenblatt's linear perceptron, two classical limitation theorems demonstrated by M. Minsky and S. Papert are discussed. These two theorems, 'Ψ One-in-a-box' and 'Ψ Parity', ultimately concern the intrinsic limitations of parallel calculations in pattern recognition problems. We demonstrate a possible solution of these limitation problems by substituting the static definition of characteristic functions and of their domains in the 'geometrical' perceptron with their dynamic definition. This dynamic consists in the mutual redefinition of the characteristic function and of its domain depending on the matching with the input.
Solution of second order quasi-linear boundary value problems by a wavelet method
Zhang, Lei; Zhou, Youhe; Wang, Jizeng
2015-03-10
A wavelet Galerkin method based on expansions in Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one concerns nonlinear heat conduction and the other bending of elastic beams. Numerical results are obtained by the proposed wavelet method. Comparison with relevant analytical solutions as well as solutions obtained by other methods shows that the method offers better efficiency and accuracy than several others, and the rate of convergence can even reach order 5.8.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1987-01-01
In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.
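In the finite dimensional case the alpha-shift amounts to solving an ordinary LQR problem for the shifted pair (A + αI, B). A small sketch (a double-integrator example of our own choosing, using SciPy's Riccati solver) verifies the prescribed stability margin:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

alpha = 1.0                           # prescribed degree of stability
A = np.array([[0.0, 1.0],             # double integrator (example system)
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Solve the algebraic Riccati equation for the shifted generator A + alpha*I;
# stabilizing A + alpha*I - B*K forces Re(eig(A - B*K)) < -alpha.
P = solve_continuous_are(A + alpha * np.eye(2), B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # optimal gain for the shifted system

eigs = np.linalg.eigvals(A - B @ K)
```

Every closed-loop eigenvalue of A - BK then lies strictly to the left of the vertical line at -alpha, which is exactly the finite dimensional version of the shift described above.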
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1985-01-01
In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.
Lorber, A.A.; Carey, G.F.; Bova, S.W.; Harle, C.H.
1996-12-31
The connection between the solution of linear systems of equations by iterative methods and explicit time stepping techniques is used to accelerate to steady state the solution of ODE systems arising from discretized PDEs which may involve either physical or artificial transient terms. Specifically, a class of Runge-Kutta (RK) time integration schemes with extended stability domains has been used to develop recursion formulas which lead to accelerated iterative performance. The coefficients for the RK schemes are chosen based on the theory of Chebyshev iteration polynomials in conjunction with a local linear stability analysis. We refer to these schemes as Chebyshev Parameterized Runge-Kutta (CPRK) methods. CPRK methods of one to four stages are derived as functions of the parameters which describe an ellipse E that the stability domain of the methods is known to contain. Of particular interest are the two-stage, first-order CPRK and the four-stage, first-order methods. It is found that the former method can be identified with any two-stage RK method through the correct choice of parameters. The latter method is found to have a wide range of stability domains, with a maximum extension of 32 along the real axis. Recursion performance results are presented for a model linear convection-diffusion problem as well as nonlinear fluid flow problems discretized by both finite-difference and finite-element methods.
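The Chebyshev-parameter idea can be illustrated in its simplest linear-system form. The sketch below is the classical Chebyshev semi-iteration for an SPD system with known eigenvalue bounds, not the authors' CPRK recursion, but it shows the same flavor of polynomial acceleration without inner products:

```python
import numpy as np

def chebyshev_iteration(A, b, lmin, lmax, maxit=50):
    """Chebyshev semi-iteration for A x = b, spectrum of A in [lmin, lmax].
    Classical three-term recurrence; no inner products are needed, the
    same property that makes RK-style recursions attractive in parallel."""
    theta = 0.5 * (lmax + lmin)      # center of the spectrum interval
    delta = 0.5 * (lmax - lmin)      # half-width of the interval
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    d = r / theta
    for _ in range(maxit):
        x = x + d
        r = r - A @ d
        rho_old, rho = rho, 1.0 / (2.0 * sigma1 - rho)
        d = rho * rho_old * d + (2.0 * rho / delta) * r
    return x
```

With exact eigenvalue bounds the error contracts roughly by (sqrt(κ) - 1)/(sqrt(κ) + 1) per step, so a few dozen iterations suffice for a well-conditioned test matrix.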
A Conforming Multigrid Method for the Pure Traction Problem of Linear Elasticity: Mixed Formulation
NASA Technical Reports Server (NTRS)
Lee, Chang-Ock
1996-01-01
A multigrid method using conforming P-1 finite element is developed for the two-dimensional pure traction boundary value problem of linear elasticity. The convergence is uniform even as the material becomes nearly incompressible. A heuristic argument for acceleration of the multigrid method is discussed as well. Numerical results with and without this acceleration as well as performance estimates on a parallel computer are included.
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Teglas, Russell
1987-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1984-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
The solution of the optimization problem of small energy complexes using linear programming methods
NASA Astrophysics Data System (ADS)
Ivanin, O. A.; Director, L. B.
2016-11-01
Linear programming methods were used to solve the optimization problem of schemes and operation modes of distributed generation energy complexes. Applicability conditions of the simplex method, applied to energy complexes that include renewable energy installations (solar, wind), diesel generators and energy storage, are considered. An analysis of decomposition algorithms for various schemes of energy complexes was made. The results of optimization calculations for energy complexes operated autonomously and as part of a distribution grid are presented.
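A stripped-down instance of such an optimization (hypothetical costs and capacities of our own; no storage, just free solar plus a diesel generator meeting hourly demand) fits directly into an LP/simplex solver:

```python
import numpy as np
from scipy.optimize import linprog

demand = [30.0, 60.0, 40.0]       # kWh needed in each hour
solar_cap = [20.0, 50.0, 10.0]    # available solar energy per hour (free)
diesel_cost = 0.3                 # cost per kWh of diesel generation
H = len(demand)

# Variables: diesel d_0..d_2, then solar s_0..s_2; minimize fuel cost.
c = [diesel_cost] * H + [0.0] * H

# Per-hour energy balance: d_t + s_t = demand_t
A_eq = np.zeros((H, 2 * H))
for t in range(H):
    A_eq[t, t] = 1.0         # d_t
    A_eq[t, H + t] = 1.0     # s_t
b_eq = demand

bounds = [(0.0, None)] * H + [(0.0, cap) for cap in solar_cap]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
```

The optimum uses all available solar and covers the shortfall (10 + 10 + 30 kWh) with diesel, for a fuel cost of 15.0; adding storage or grid exchange would only add variables and balance constraints to the same structure.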
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithms development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project begun with an investigation on how to practically update a preconditioner obtained from an ILU-type factorization, when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from Multigrid with algebraic multilevel methods. 9. We have released a new version on our parallel solver - called pARMS [new version is version 3]. As part of this we have tested the code in complex settings - including the solution of Maxwell and Helmholtz equations and for a problem of crystal growth.10. As an application of polynomial preconditioning we considered the
Dual mean field search for large scale linear and quadratic knapsack problems
NASA Astrophysics Data System (ADS)
Banda, Juan; Velasco, Jonás; Berrones, Arturo
2017-07-01
An implementation of mean field annealing to deal with large scale linear and nonlinear binary optimization problems is given. Mean field annealing is based on the analogy between combinatorial optimization and interacting physical systems at thermal equilibrium. Specifically, a mean field approximation of the Boltzmann distribution given by a Lagrangian that encompasses the objective function and the constraints is calculated. The original discrete task is in this way transformed into a continuous variational problem. In our version of mean field annealing, no temperature parameter is used; instead, a good starting point in the dual space is given by a 'thermodynamic limit' argument. The method is tested on linear and quadratic knapsack problems with sizes that are considerably larger than those used in previous studies of mean field annealing. Dual mean field annealing is capable of finding high quality solutions in running times that are orders of magnitude shorter than those of state of the art algorithms. Moreover, as may be expected for a mean field theory, the solutions tend to be more accurate as the number of variables grows.
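A minimal reconstruction of the dual mean-field idea for the linear 0/1 knapsack (our own sketch, not the authors' implementation: a fixed sharp sigmoid stands in for annealing, and plain bisection sets the dual variable):

```python
import math

def dual_mean_field_knapsack(p, w, C, beta=50.0, iters=60):
    """Mean-field heuristic for max sum(p*x) s.t. sum(w*x) <= C, x in {0,1}.
    Each x_i is relaxed to a sigmoid of its 'field' p_i - lam*w_i; the
    dual variable lam is set by bisection so the soft weight fits the
    capacity (a crude stand-in for the paper's dual-space starting point)."""
    def soft(lam):
        xs = []
        for pi, wi in zip(p, w):
            z = max(min(beta * (pi - lam * wi), 60.0), -60.0)  # avoid overflow
            xs.append(1.0 / (1.0 + math.exp(-z)))
        return xs
    lo, hi = 0.0, max(pi / wi for pi, wi in zip(p, w)) + 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(wi * xi for wi, xi in zip(w, soft(mid))) > C:
            lo = mid          # too heavy: raise the price of weight
        else:
            hi = mid
    return [1 if xi > 0.5 else 0 for xi in soft(hi)]
```

On the small instance p = (6, 5, 4, 3), w = (5, 4, 3, 2), C = 9 the heuristic selects items 2, 3 and 4 (value 12, weight 9), which brute-force enumeration confirms is optimal; a full implementation would follow this with a local cleanup of the rounded solution.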
Complementarity of genuine multipartite Bell nonlocality
NASA Astrophysics Data System (ADS)
Sami, Sasha; Chakrabarty, Indranil; Chaturvedi, Anubhav
2017-08-01
We introduce a feature of no-signaling (Bell) nonlocal theories: namely, when a system of multiple parties manifests genuine nonlocal correlation, then there cannot be arbitrarily high nonlocal correlation among any subset of the parties. We call this feature complementarity of genuine multipartite nonlocality. We use Svetlichny's criterion for genuine multipartite nonlocality and nonlocal games to derive the complementarity relations under no-signaling constraints. We find that the complementarity relations are tightened for the much stricter quantum constraints. We compare this notion with the well-known notion of monogamy of nonlocality. As a consequence, we obtain tighter nontrivial monogamy relations that take into account genuine multipartite nonlocality. Furthermore, we provide numerical evidence showcasing this feature using a bipartite measure and several other well-known tripartite measures of nonlocality.
Scilab software as an alternative low-cost computing in solving the linear equations problem
NASA Astrophysics Data System (ADS)
Agus, Fahrul; Haviluddin
2017-02-01
Numerical computation packages are widely used in both teaching and research. These packages include licensed (proprietary) and open-source (non-proprietary) software. One reason to use such a package is the complexity of the mathematical functions involved (e.g., linear problems) and the growing number of variables in linear and non-linear functions. The aim of this paper was to reflect on key aspects of method, didactics, and creative praxis in the teaching of linear equations in higher education; if implemented, this could contribute to better learning in mathematics (e.g., solving simultaneous linear equations), which is essential for future engineers. The focus of this study was to introduce the numerical computation package Scilab as an alternative low-cost computing environment. In this paper, Scilab was used to develop activities related to the mathematical models. In the experiments, four numerical methods were implemented: Gaussian elimination, Gauss-Jordan elimination, the inverse-matrix method, and lower-upper (LU) decomposition. The results of this study show that routines for these numerical methods were created and explored using Scilab procedures, and that these routines can serve as teaching material for a course.
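The abstract does not reproduce the Scilab scripts; as an equivalent teaching-style sketch, the following implements one of the four named methods, Gaussian elimination with partial pivoting, in NumPy and checks it against `numpy.linalg.solve`. The 3x3 system is an invented example.

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting (teaching sketch)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]  # swap rows k and p
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                    # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                   # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2., 1., 1.], [1., 3., 2.], [1., 0., 0.]])
b = np.array([4., 5., 6.])
assert np.allclose(gauss_solve(A, b), np.linalg.solve(A, b))
```

The same exercise translates almost line for line into Scilab's matrix syntax, which is the pedagogical point of the paper.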
Dang, Chuangyin; Liang, Jianqing; Yang, Yang
2013-03-01
A deterministic annealing algorithm is proposed for approximating a solution of the linearly constrained nonconvex quadratic minimization problem. The algorithm is derived from applications of a Hopfield-type barrier function in dealing with box constraints and Lagrange multipliers in handling linear equality constraints, and attempts to obtain a solution of good quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the box constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem. Preliminary numerical results show that the algorithm seems effective and efficient. Copyright © 2012 Elsevier Ltd. All rights reserved.
Determination of Interspin Distance Distributions by cw-ESR Is a Single Linear Inverse Problem
Chiang, Yun-Wei; Zheng, Tong-Yuan; Kao, Chiao-Jung; Horng, Jia-Cherng
2009-01-01
The cw-ESR distance measurement method is extremely valuable for studying the dynamics-function relationship of biomolecules. However, extracting distance distributions from experiments has been a highly technique-demanding procedure. It has never been conclusively identified, to our knowledge, that the problems involved in the analysis are ill posed and are best solved using Tikhonov regularization. We treat the problems from a novel point of view. First of all, we identify the equations involved and uncover that they are actually two linear first-kind Fredholm integral equations. They can be combined into one single linear inverse problem and solved in a Tikhonov regularization procedure. The improvement with our new treatment is significant. Our approach is a direct and reliable mathematical method capable of providing an unambiguous solution to the ill-posed problem. It need not perform nonlinear least-squares fitting to infer a solution from noise-contaminated data and, accordingly, substantially reduces the computation time and the difficulty of analysis. Numerical tests and experimental data of polyproline II peptides with various spin-labeled sites are provided to demonstrate our approach. The high resolution of the distance distributions obtainable with our new approach enables a detailed insight into the flexibility of dynamic structure and the identification of conformational species in solution state. PMID:19651052
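The core recipe, discretize a first-kind Fredholm equation and stabilize it with Tikhonov regularization, can be sketched generically; the Gaussian kernel, noise level, and regularization parameter below are synthetic stand-ins, not the cw-ESR kernels of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretised first-kind Fredholm equation K x = s with a smoothing kernel,
# the structure identified in the abstract (kernel here is synthetic).
n = 60
t = np.linspace(0, 1, n)
K = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2) / n  # near-singular matrix
x_true = np.exp(-120.0 * (t - 0.4) ** 2)                # model "distribution"
s = K @ x_true + 1e-4 * rng.standard_normal(n)          # noisy data

# Tikhonov regularisation: minimise ||K x - s||^2 + lam * ||x||^2,
# whose normal equations give a single well-posed linear solve.
lam = 1e-6  # assumed value; in practice chosen by L-curve or discrepancy
x_tik = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ s)

# The regularised solution fits the data at least as well as x = 0 does.
assert np.linalg.norm(K @ x_tik - s) <= np.linalg.norm(s)
```

Because the minimizer has a closed form, no nonlinear least-squares iteration is needed, which is the computational advantage the abstract emphasizes.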
A new gradient-based neural network for solving linear and quadratic programming problems.
Leung, Y; Chen, K Z; Jiao, Y C; Gao, X B; Leung, K S
2001-01-01
A new gradient-based neural network is constructed on the basis of the duality theory, optimization theory, convex analysis theory, Lyapunov stability theory, and LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) such that the function E(x, y) is convex and differentiable, and the resulting network is more efficient. This network involves all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with a unique solution or infinitely many solutions, we have rigorously proven that for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network is different from the existing networks which use the penalty method or Lagrange method, and the inequality constraints are properly handled. The simulation results show that the proposed neural network is feasible and efficient.
Arbitrary Lagrangian-Eulerian method for non-linear problems of geomechanics
NASA Astrophysics Data System (ADS)
Nazem, M.; Carter, J. P.; Airey, D. W.
2010-06-01
In many geotechnical problems it is vital to consider the geometrical non-linearity caused by large deformation in order to capture a more realistic model of the true behaviour. The solutions so obtained should then be more accurate and reliable, which should ultimately lead to cheaper and safer design. The Arbitrary Lagrangian-Eulerian (ALE) method originated from fluid mechanics, but has now been well established for solving large deformation problems in geomechanics. This paper provides an overview of the ALE method and its challenges in tackling problems involving non-linearities due to material behaviour, large deformation, changing boundary conditions and time-dependency, including material rate effects and inertia effects in dynamic loading applications. Important aspects of ALE implementation into a finite element framework will also be discussed. This method is then employed to solve some interesting and challenging geotechnical problems such as the dynamic bearing capacity of footings on soft soils, consolidation of a soil layer under a footing, and the modelling of dynamic penetration of objects into soil layers.
Continuous-time Q-learning for infinite-horizon discounted cost linear quadratic regulator problems.
Palanisamy, Muthukumar; Modares, Hamidreza; Lewis, Frank L; Aurangzeb, Muhammad
2015-02-01
This paper presents a method of Q-learning to solve the discounted linear quadratic regulator (LQR) problem for continuous-time (CT) continuous-state systems. Most available methods in the existing literature for CT systems to solve the LQR problem generally need partial or complete knowledge of the system dynamics. Q-learning is effective for unknown dynamical systems, but has generally been well understood only for discrete-time systems. The contribution of this paper is to present a Q-learning methodology for CT systems which solves the LQR problem without any knowledge of the system dynamics. A natural and rigorously justified parameterization of the Q-function is given in terms of the state, the control input, and its derivatives. This parameterization allows the implementation of an online Q-learning algorithm for CT systems. The simulation results supporting the theoretical development are also presented.
A Linear Time Algorithm for the Minimum Spanning Caterpillar Problem for Bounded Treewidth Graphs
NASA Astrophysics Data System (ADS)
Dinneen, Michael J.; Khosravani, Masoud
We consider the Minimum Spanning Caterpillar Problem (MSCP) in a graph where each edge has two costs, a spine (path) cost and a leaf cost, depending on whether it is used as a spine or a leaf edge. The goal is to find a spanning caterpillar in which the sum of its edge costs is minimum. We show that the problem has a linear time algorithm when a tree decomposition of the graph is given as part of the input. Despite the fast-growing constant factor in the time complexity of our algorithm, it is still practical and efficient for some classes of graphs, such as outerplanar, series-parallel (K 4 minor-free), and Halin graphs. We also briefly explain how one can modify our algorithm to solve the Minimum Spanning Ring Star and the Dual Cost Minimum Spanning Tree Problems.
Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems
Choi, Sou-Cheng T.; Saunders, Michael A.
2014-01-01
We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite preconditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255
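MINRES-QLP itself ships as the FORTRAN 90/MATLAB package the abstract describes; for a runnable flavor of the underlying method, SciPy exposes plain MINRES, used below on a synthetic symmetric positive definite system (the test matrix is an invented example, and the singular minimum-length case that distinguishes MINRES-QLP is not exercised).

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)

# Build a random symmetric positive definite matrix via an orthogonal
# similarity transform of a prescribed positive spectrum.
n = 50
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T
b = rng.standard_normal(n)

# MINRES only needs symmetric matrix-vector products; info == 0 signals
# convergence to the default relative residual tolerance.
x, info = minres(A, b)
assert info == 0
assert np.linalg.norm(A @ x - b) <= 1e-3 * np.linalg.norm(b)
```

On a singular symmetric system, `minres` may stagnate or return one of many solutions; that is exactly the gap MINRES-QLP closes with its minimum-length guarantee.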
Zhidkov, P E
2000-04-30
For a non-linear eigenvalue problem similar to a linear Sturm-Liouville problem the properties of the spectrum and the eigenfunctions are analysed. The system of eigenfunctions is shown to be a Riesz basis in L_2.
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
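The round-then-explore idea can be sketched on a toy two-variable pure-integer program (the instance, the starting point, and the plain unit-neighborhood search below are illustrative assumptions; IESIP itself is the TURBO Pascal program described above, with a modified Hooke-Jeeves move). Brute force over the small feasible box stands in for branch-and-bound as the reference answer.

```python
from itertools import product

# Toy instance: maximise 3x + 2y  s.t.  2x + y <= 10, x + 3y <= 15, x, y >= 0.
def feasible(x, y):
    return x >= 0 and y >= 0 and 2 * x + y <= 10 and x + 3 * y <= 15

def obj(p):
    return 3 * p[0] + 2 * p[1]

# Reference optimum by exhaustive search over the bounding box.
best = max((p for p in product(range(11), repeat=2) if feasible(*p)), key=obj)

# Unit-neighbourhood exploratory improvement from a rounded starting point
# (here a hypothetical rounded continuous solution).
x = (3, 3)
improved = True
while improved:
    improved = False
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        cand = (x[0] + dx, x[1] + dy)
        if feasible(*cand) and obj(cand) > obj(x):
            x, improved = cand, True
            break

assert x == best and obj(x) == 17
```

On this instance the unit moves reach the true optimum (3, 4); in general such a search only guarantees a local optimum, which is why IESIP layers greedy and rounding heuristics on top of it.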
New algorithms for linear k-matroid intersection and matroid k-parity problems
Barvinok, A.
1994-12-31
We present algorithms for the k-Matroid Intersection Problem and for the Matroid k-Parity Problem when the matroids are represented over the field of rational numbers and k > 2. The computational complexity of the algorithms is linear in the cardinality n and singly exponential in the rank r of the matroids. Thus if n grows faster than a linear function in r (this is the case for most combinatorial applications) then the algorithms are asymptotically faster than exhaustive search and provide the best known worst-case complexity. If r = O(log n) then the algorithms have polynomial-time complexity. As an application, we prove that for any fixed k one can determine in polynomial time whether there exist O(log n) pairwise disjoint edges in a given uniform k-hypergraph on n vertices. Our approach extends known methods of linear algebra developed earlier for the case k = 2. Using the generalized Binet-Cauchy formula and its analogue for the Pfaffian we reduce in O(nr^{2k}) time the k-Matroid Intersection Problem to computation of the hyperdeterminant of a 2k-dimensional r x ... x r tensor and the Matroid k-Parity Problem to computation of the hyperpfaffian of a 2k-dimensional 2r x ... x 2r tensor. We use dynamic programming to compute these invariants of tensors using O(r^{2k} 4^{rk}) and O(r^{2k+1} 4^{r}) arithmetic operations respectively.
Boundary parametric approximation to the linearized scalar potential magnetostatic field problem
Bramble, J.H.; Pasciak, J.E.
1984-01-01
We consider the linearized scalar potential formulation of the magnetostatic field problem in this paper. Our approach involves a reformulation of the continuous problem as a parametric boundary problem. By the introduction of a spherical interface and the use of spherical harmonics, the infinite boundary conditions can also be satisfied in the parametric framework. That is, the field in the exterior of a sphere is expanded in a harmonic series of eigenfunctions for the exterior harmonic problem. The approach is essentially a finite element method coupled with a spectral method via a boundary parametric procedure. The reformulated problem is discretized by finite element techniques which lead to a discrete parametric problem which can be solved by well conditioned iteration involving only the solution of decoupled Neumann type elliptic finite element systems and L^2 projection onto subspaces of spherical harmonics. Error and stability estimates given show exponential convergence in the degree of the spherical harmonics and optimal order convergence with respect to the finite element approximation for the resulting fields in L^2. 24 references.
Kew, William; Mitchell, John B O
2015-09-01
The application of Machine Learning to cheminformatics is a large and active field of research, but there exist few papers which discuss whether ensembles of different Machine Learning methods can improve upon the performance of their component methodologies. Here we investigated a variety of methods, including kernel-based, tree, linear, neural networks, and both greedy and linear ensemble methods. These were all tested against a standardised methodology for regression with data relevant to the pharmaceutical development process. This investigation focused on QSPR problems within drug-like chemical space. We aimed to investigate which methods perform best, and how the 'wisdom of crowds' principle can be applied to ensemble predictors. It was found that no single method performs best for all problems, but that a dynamic, well-structured ensemble predictor would perform very well across the board, usually providing an improvement in performance over the best single method. Its use of weighting factors allows the greedy ensemble to acquire a bigger contribution from the better performing models, and this helps the greedy ensemble generally to outperform the simpler linear ensemble. Choice of data preprocessing methodology was found to be crucial to performance of each method too. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Acceleration of multiple solution of a boundary value problem involving a linear algebraic system
NASA Astrophysics Data System (ADS)
Gazizov, Talgat R.; Kuksenko, Sergey P.; Surovtsev, Roman S.
2016-06-01
Multiple solution of a boundary value problem that involves a linear algebraic system is considered. A new approach to accelerating the solution is proposed. The approach uses the structure of the linear system matrix. In particular, the location of entries in the right columns and lower rows of the matrix, which vary during computation over the range of parameters, is used to apply block LU decomposition. Application of the approach is illustrated by the multiple computation of the capacitance matrix by the method of moments used in numerical electromagnetics. Expressions for analytic estimation of the acceleration are presented. Results of numerical experiments for the solution of 100 linear systems with matrix orders of 1000, 2000, and 3000 and different ratios of varied to constant matrix entries show that block LU decomposition can be effective for multiple solution of linear systems. The speedup compared to pointwise LU factorization increases (up to 15x) with larger number and order of the systems and a lower number of varied entries.
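The block LU update of varied trailing rows and columns is not spelled out in the abstract; the sketch below shows the simpler ingredient it builds on, reusing one O(n^3) factorization across many O(n^2) triangular solves via SciPy (matrix and right-hand sides are synthetic).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)

# Well-conditioned test matrix (diagonally dominated by construction).
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)

# Factor once: O(n^3).
lu, piv = lu_factor(A)

# Solve many times: each lu_solve is only O(n^2) triangular substitution.
for _ in range(100):
    b = rng.standard_normal(n)
    x = lu_solve((lu, piv), b)
    assert np.allclose(A @ x, b)
```

The paper's block LU variant goes a step further: when only the trailing block of A changes between parameter values, the factorization of the unchanged leading block is kept and only the small trailing block is re-factored.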
Self-complementarity of messenger RNA's of periodic proteins
NASA Technical Reports Server (NTRS)
Ycas, M.
1973-01-01
It is shown that the mRNA's of three periodic proteins, collagen, keratin and freezing point depressing glycoproteins show a marked degree of self-complementarity. The possible origin of this self-complementarity is discussed.
Madrigal-González, Jaime; Ruiz-Benito, Paloma; Ratcliffe, Sophia; Calatayud, Joaquín; Kändler, Gerald; Lehtonen, Aleksi; Dahlgren, Jonas; Wirth, Christian; Zavala, Miguel A.
2016-01-01
Neglecting tree size and stand structure dynamics might bias the interpretation of the diversity-productivity relationship in forests. Here we show evidence that complementarity is contingent on tree size across large-scale climatic gradients in Europe. We compiled growth data of the 14 most dominant tree species in 32,628 permanent plots covering boreal, temperate and Mediterranean forest biomes. Niche complementarity is expected to result in significant growth increments of trees surrounded by a larger proportion of functionally dissimilar neighbours. Functional dissimilarity at the tree level was assessed using four functional types: i.e. broad-leaved deciduous, broad-leaved evergreen, needle-leaved deciduous and needle-leaved evergreen. Using Linear Mixed Models we show that complementarity effects depend on tree size along an energy availability gradient across Europe. Specifically: (i) complementarity effects at low and intermediate positions of the gradient (coldest-temperate areas) were stronger for small than for large trees; (ii) in contrast, at the upper end of the gradient (warmer regions), complementarity is more widespread in larger than smaller trees, which in turn showed negative growth responses to increased functional dissimilarity. Our findings suggest that the outcome of species mixing on stand productivity might critically depend on individual size distribution structure along gradients of environmental variation. PMID:27571971
NASA Astrophysics Data System (ADS)
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna
2016-05-01
State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of a SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
NASA Astrophysics Data System (ADS)
Frick, K.; Grasmair, M.
2012-10-01
We study the application of the augmented Lagrangian method to the solution of linear ill-posed problems. Previously, linear convergence rates with respect to the Bregman distance have been derived under the classical assumption of a standard source condition. Using the method of variational inequalities, we extend these results in this paper to convergence rates of lower order, both for the case of an a priori parameter choice and an a posteriori choice based on Morozov’s discrepancy principle. In addition, our approach allows the derivation of convergence rates with respect to distance measures different from the Bregman distance. As a particular application, we consider sparsity promoting regularization, where we derive a range of convergence rates with respect to the norm under the assumption of restricted injectivity in conjunction with generalized source conditions of Hölder type.
Resampling versus repair in evolution strategies applied to a constrained linear problem.
Arnold, Dirk V
2013-01-01
We study the behaviour of multi-recombination evolution strategies for the problem of maximising a linear function with a single linear constraint. Two variants of the algorithm are considered: a strategy that resamples infeasible candidate solutions and one that applies a simple repair mechanism. Integral expressions that describe the strategies' one-generation behaviour are derived and used in a simple zeroth order model for the steady state attained when operating with constant step size. Applied to the analysis of cumulative step size adaptation, the approach provides an intuitive explanation for the qualitative difference in the algorithm variants' behaviour. The findings have implications for the design of constraint handling techniques to be used in connection with cumulative step size adaptation.
Robustness in linear quadratic feedback design with application to an aircraft control problem
NASA Technical Reports Server (NTRS)
Patel, R. V.; Sridhar, B.; Toda, M.
1977-01-01
Some new results concerning robustness and asymptotic properties of error bounds of a linear quadratic feedback design are applied to an aircraft control problem. An autopilot for the flare control of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA) is designed based on Linear Quadratic (LQ) theory and the results developed in this paper. The variation of the error bounds to changes in the weighting matrices in the LQ design is studied by computer simulations, and appropriate weighting matrices are chosen to obtain a reasonable error bound for variations in the system matrix and at the same time meet the practical constraints for the flare maneuver of the AWJSRA. Results from the computer simulation of a satisfactory autopilot design for the flare control of the AWJSRA are presented.
A method of fast, sequential experimental design for linearized geophysical inverse problems
NASA Astrophysics Data System (ADS)
Coles, Darrell A.; Morgan, Frank Dale
2009-07-01
An algorithm for linear(ized) experimental design is developed for a determinant-based design objective function. This objective function is common in design theory and is used to design experiments that minimize the model entropy, a measure of posterior model uncertainty. Of primary significance in design problems is computational expediency. Several earlier papers have focused attention on posing design objective functions and opted to use global search methods for finding the critical points of these functions, but these algorithms are too slow to be practical. The proposed technique is distinguished primarily for its computational efficiency, which derives partly from a greedy optimization approach, termed sequential design. Computational efficiency is further enhanced through formulae for updating determinants and matrix inverses without need for direct calculation. The design approach is orders of magnitude faster than a genetic algorithm applied to the same design problem. However, greedy optimization often trades global optimality for increased computational speed; the ramifications of this tradeoff are discussed. The design methodology is demonstrated on a simple, single-borehole DC electrical resistivity problem. Designed surveys are compared with random and standard surveys, both with and without prior information. All surveys were compared with respect to a 'relative quality' measure, the post-inversion model per cent rms error. The issue of design for inherently ill-posed inverse problems is considered and an approach for circumventing such problems is proposed. The design algorithm is also applied in an adaptive manner, with excellent results suggesting that smart, compact experiments can be designed in real time.
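The determinant and inverse updates mentioned above have the following flavor in a greedy D-optimal selection loop; the candidate matrix, ridge initialization, and selection count below are illustrative assumptions, not the authors' borehole survey-design setup.

```python
import numpy as np

def greedy_d_optimal(G, k, ridge=1e-6):
    """Greedily pick k rows of candidate matrix G to maximize det(G_S^T G_S).
    Uses det(M + g g^T) = det(M) * (1 + g^T M^{-1} g), so the best next row
    maximizes g^T M^{-1} g, and Sherman-Morrison updates M^{-1} in place --
    no refactorization per step."""
    n, p = G.shape
    Minv = np.eye(p) / ridge          # small ridge keeps M invertible at the start
    chosen = []
    for _ in range(k):
        gains = np.array([G[i] @ Minv @ G[i] if i not in chosen else -np.inf
                          for i in range(n)])
        i = int(np.argmax(gains))
        chosen.append(i)
        u = Minv @ G[i]
        Minv -= np.outer(u, u) / (1.0 + G[i] @ u)   # Sherman-Morrison update
    return chosen

rng = np.random.default_rng(0)
G = rng.standard_normal((40, 3))      # 40 candidate observations, 3 model parameters
chosen = greedy_d_optimal(G, 5)
print(chosen)                          # indices of the 5 selected observations
```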
A novel approach based on preference-based index for interval bilevel linear programming problem.
Ren, Aihong; Wang, Yuping; Xue, Xingsi
2017-01-01
This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation [Formula: see text]. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.
Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems
Avron, Haim; Ng, Esmond G.; Toledo, Sivan
2008-03-21
We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax - b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ||Ax - b||_2, when solved using LSQR. We propose applications for the new technique. When A is rank deficient we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
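A dense toy sketch of the idea (the paper targets the sparse setting; the appended rows and sizes here are assumptions): factor a perturbed Â, then run LSQR on the right-preconditioned operator A R⁻¹ and map back.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Perturbed matrix: append a small multiple of the identity so the R factor
# is well conditioned even if A were rank deficient.
A_hat = np.vstack([A, 1e-4 * np.eye(n)])
R = qr(A_hat, mode='economic')[1]

# Solve min ||(A R^{-1}) y - b||_2 with LSQR, then recover x = R^{-1} y.
op = LinearOperator(
    (m, n),
    matvec=lambda y: A @ solve_triangular(R, y),
    rmatvec=lambda z: solve_triangular(R, A.T @ z, trans='T'),
)
y = lsqr(op, b, atol=1e-10, btol=1e-10)[0]
x = solve_triangular(R, y)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]   # direct solve for comparison
print(np.allclose(x, x_ref, atol=1e-6))
```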
Fredholm alternative for periodic-Dirichlet problems for linear hyperbolic systems
NASA Astrophysics Data System (ADS)
Kmit, Irina; Recke, Lutz
2007-11-01
This paper concerns hyperbolic systems of two linear first-order PDEs in one space dimension with periodicity conditions in time and reflection boundary conditions in space. The coefficients of the PDEs are supposed to be time independent, but allowed to be discontinuous with respect to the space variable. We construct two scales of Banach spaces (for the solutions and for the right-hand sides of the equations, respectively) such that the problem can be modeled by means of Fredholm operators of index zero between corresponding spaces of the two scales.
NASA Astrophysics Data System (ADS)
Tang, Yao-Zong; Li, Xiao-Lin
2017-03-01
We first give a stabilized improved moving least squares (IMLS) approximation, which has better computational stability and precision than the IMLS approximation. Then, analysis of the improved element-free Galerkin method is provided theoretically for both linear and nonlinear elliptic boundary value problems. Finally, numerical examples are given to verify the theoretical analysis. Project supported by the National Natural Science Foundation of China (Grant No. 11471063), the Chongqing Research Program of Basic Research and Frontier Technology, China (Grant No. cstc2015jcyjBX0083), and the Educational Commission Foundation of Chongqing City, China (Grant No. KJ1600330).
On the classical solution to the linear-constrained minimum energy problem
NASA Astrophysics Data System (ADS)
Boissaux, Marc; Schiltz, Jang
2012-02-01
Minimum energy problems involving linear systems with quadratic performance criteria are classical in optimal control theory. The case where controls are constrained is discussed in Athans and Falb (1966) [Athans, M. and Falb, P.L. (1966), Optimal Control: An Introduction to the Theory and Its Applications, New York: McGraw-Hill Book Co.] who obtain a componentwise optimal control expression involving a saturation function expression. We show why the given expression is not generally optimal in the case where the dimension of the control is greater than one and provide a numerical counterexample.
NASA Astrophysics Data System (ADS)
Wu, Jiming; Gao, Zhiming; Dai, Zihuan
2012-08-01
In this paper a stabilized discretization scheme for heterogeneous and anisotropic diffusion problems is proposed on general, possibly nonconforming polygonal meshes. The unknowns are the values at the cell centers and the scheme relies on a linearity-preserving criterion and the use of so-called harmonic averaging points located at the interfaces of heterogeneity. The stability result and the error estimate, both in the H1 norm, are obtained under quite general and standard assumptions on polygonal meshes. The experimental results on a number of different meshes show that the scheme maintains optimal convergence rates in both the L2 and H1 norms.
A stabilized complementarity formulation for nonlinear analysis of 3D bimodular materials
NASA Astrophysics Data System (ADS)
Zhang, L.; Zhang, H. W.; Wu, J.; Yan, B.
2016-06-01
Bi-modulus materials with different mechanical responses in tension and compression are often found in civil, composite, and biological engineering. Numerical analysis of bimodular materials is strongly nonlinear and convergence is usually a problem for traditional iterative schemes. This paper aims to develop a stabilized computational method for nonlinear analysis of 3D bimodular materials. Based on the parametric variational principle, a unified constitutive equation of 3D bimodular materials is proposed, which allows the eight principal stress states to be indicated by three parametric variables introduced in the principal stress directions. The original problem is transformed into a standard linear complementarity problem (LCP) by the parametric virtual work principle, and a quadratic programming algorithm is developed by solving the LCP with the classic Lemke algorithm. Update of elasticity and stiffness matrices is avoided and, thus, the proposed algorithm shows an excellent convergence behavior compared with traditional iterative schemes. Numerical examples show that the proposed method is valid and can accurately analyze mechanical responses of 3D bimodular materials. Also, stability of the algorithm is greatly improved.
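For orientation, the standard LCP form mentioned above is: find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0. The sketch below solves a tiny instance with projected Gauss-Seidel, a simpler iterative scheme than the Lemke pivoting the paper employs (convergent for symmetric positive definite M); the 2×2 data are illustrative.

```python
import numpy as np

def pgs_lcp(M, q, iters=500):
    """Solve w = M z + q, z >= 0, w >= 0, z^T w = 0 by projected Gauss-Seidel."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # residual of row i excluding the diagonal term, then clamp at 0
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # SPD example
q = np.array([-5.0, -6.0])
z = pgs_lcp(M, q)
w = M @ z + q
print(z, w, z @ w)    # z >= 0, w >= 0, and z.w vanishes (complementarity)
```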
A Vector Study of Linearized Supersonic Flow Applications to Nonplanar Problems
NASA Technical Reports Server (NTRS)
Martin, John C
1953-01-01
A vector study of the partial-differential equation of steady linearized supersonic flow is presented. General expressions which relate the velocity potential in the stream to the conditions on the disturbing surfaces are derived. In connection with these general expressions the concept of the finite part of an integral is discussed. A discussion of problems dealing with planar bodies is given and the conditions for the solution to be unique are investigated. Problems concerning nonplanar systems are investigated, and methods are derived for the solution of some simple nonplanar bodies. The surface pressure distribution and the damping in roll are found for rolling tails consisting of four, six, and eight rectangular fins for the Mach number range where the region of interference between adjacent fins does not affect the fin tips.
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
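The eigenvector analysis described above can be sketched with a truncated SVD; the synthetic matrix, singular values, and noise cutoff below are illustrative assumptions, not data from the surface-wave application.

```python
import numpy as np

# Truncated-SVD view of the discrete linear inverse problem G m = d:
# singular vectors with s_i above a noise-derived cutoff are the "determined"
# parameter combinations; R = V_k V_k^T is the resolution matrix (R = I would
# mean perfect resolution).
rng = np.random.default_rng(3)
U0 = np.linalg.qr(rng.standard_normal((8, 5)))[0]     # orthonormal columns
V0 = np.linalg.qr(rng.standard_normal((5, 5)))[0]
G = U0 @ np.diag([5.0, 2.0, 1.0, 0.05, 0.01]) @ V0.T  # two combinations poorly constrained

U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = int(np.sum(s > 0.1))               # combinations resolvable above the noise level
R = Vt[:k].T @ Vt[:k]                  # resolution matrix
print(k, round(float(np.trace(R)), 3)) # trace(R) counts resolved combinations
```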
Xia, Youshen; Sun, Changyin; Zheng, Wei Xing
2012-05-01
There is growing interest in solving linear L1 estimation problems because of the sparsity of the solution and robustness against non-Gaussian noise. This paper proposes a discrete-time neural network that can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. Then, the proposed neural network is efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the nonunique solutions of the linear L1 estimation problems but also needs much less computational time than the related algorithms in solving both linear L1 estimation and image restoration problems.
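The linear L1 estimation problem min_x ||Ax − b||_1 can also be written as a linear program; the sketch below (SciPy's LP solver on synthetic data, not the proposed neural network) illustrates the robustness to non-Gaussian outliers.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """min_x ||A x - b||_1 as an LP: add slacks t with -t <= A x - b <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])        # minimize sum of slacks
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[::10] += 5.0            # a few gross outliers
x = l1_fit(A, b)
print(np.round(x, 3))     # close to x_true despite the outliers
```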
A linear model approach for ultrasonic inverse problems with attenuation and dispersion.
Carcreff, Ewen; Bourguignon, Sébastien; Idier, Jérôme; Simon, Laurent
2014-07-01
Ultrasonic inverse problems such as spike train deconvolution, synthetic aperture focusing, or tomography attempt to reconstruct spatial properties of an object (discontinuities, delaminations, flaws, etc.) from noisy and incomplete measurements. They require an accurate description of the data acquisition process. Dealing with frequency-dependent attenuation and dispersion is therefore crucial because both phenomena modify the wave shape as the travel distance increases. In an inversion context, this paper proposes to exploit a linear model of ultrasonic data taking into account attenuation and dispersion. The propagation distance is discretized to build a finite set of radiation impulse responses. Attenuation is modeled with a frequency power law and then dispersion is computed to yield physically consistent responses. Using experimental data acquired from attenuative materials, this model outperforms the standard attenuation-free model and other models of the literature. Because of model linearity, robust estimation methods can be implemented. When matched filtering is employed for single echo detection, the model that we propose yields precise estimation of the attenuation coefficient and of the sound velocity. A thickness estimation problem is also addressed through spike deconvolution, for which the proposed model also achieves accurate results.
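A minimal, assumption-laden sketch of the kind of linear model described above: discretize the propagation distance and build one impulse response per distance under a frequency power-law attenuation |H(f, z)| = exp(−α₀ |f|^y z). The material constants are hypothetical, and the dispersion correction the paper adds for physical consistency is omitted here.

```python
import numpy as np

fs, nfft = 50e6, 1024
f = np.fft.rfftfreq(nfft, 1.0 / fs)          # frequency grid in Hz
alpha0, power, c = 60.0, 1.0, 6000.0         # hypothetical constants (Np/m/MHz^y, m/s)

energies = []
for z in (0.01, 0.02, 0.04):                 # propagation distances in metres
    # power-law attenuation times the pure propagation delay z / c
    H = np.exp(-alpha0 * (f / 1e6) ** power * z) * np.exp(-2j * np.pi * f * z / c)
    h = np.fft.irfft(H, nfft)                # radiation impulse response
    energies.append(float(np.sum(h ** 2)))
print(energies)                              # energy decays as distance grows
```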
Linear stability of the Couette flow of a vibrationally excited gas. 2. viscous problem
NASA Astrophysics Data System (ADS)
Grigor'ev, Yu. N.; Ershov, I. V.
2016-03-01
Based on the linear theory, stability of viscous disturbances in a supersonic plane Couette flow of a vibrationally excited gas described by a system of linearized equations of two-temperature gas dynamics including shear and bulk viscosity is studied. It is demonstrated that two sets are identified in the spectrum of the problem of stability of plane waves, similar to the case of a perfect gas. One set consists of viscous acoustic modes, which asymptotically converge to even and odd inviscid acoustic modes at high Reynolds numbers. The eigenvalues from the other set have no asymptotic relationship with the inviscid problem and are characterized by large damping decrements. Two most unstable viscous acoustic modes (I and II) are identified; the limits of these modes were considered previously in the inviscid approximation. It is shown that there are domains in the space of parameters for both modes, where the presence of viscosity induces appreciable destabilization of the flow. Moreover, the growth rates of disturbances are appreciably greater than the corresponding values for the inviscid flow, while thermal excitation in the entire considered range of parameters increases the stability of the viscous flow. For a vibrationally excited gas, the critical Reynolds number as a function of the thermal nonequilibrium degree is found to be greater by 12% than for a perfect gas.
Aksenov, V. L.; Kiselev, M. A.
2010-12-15
General problems of the complementarity of different physical methods, specific features of the interaction between neutrons and matter, and time-of-flight neutron diffraction are discussed. The results of studying the kinetics of structural changes in lipid membranes under hydration and the self-assembly of the lipid bilayer in the presence of a detergent are reported. The possibilities of the complementarity of neutron diffraction and X-ray synchrotron radiation, and of the developing free-electron laser, are noted.
NASA Astrophysics Data System (ADS)
Zhou, Qinglong; Long, Yiming
2017-04-01
In this paper, we consider the elliptic collinear solutions of the classical n-body problem, where the n bodies always stay on a straight line, and each of them moves on its own elliptic orbit with the same eccentricity. Such a motion is called an elliptic Euler-Moulton collinear solution. Here we prove that the corresponding linearized Hamiltonian system at such an elliptic Euler-Moulton collinear solution of n-bodies splits into (n-1) independent linear Hamiltonian systems, the first one is the linearized Hamiltonian system of the Kepler 2-body problem at Kepler elliptic orbit, and each of the other (n-2) systems is the essential part of the linearized Hamiltonian system at an elliptic Euler collinear solution of a 3-body problem whose mass parameter is modified. Then the linear stability of such a solution in the n-body problem is reduced to those of the corresponding elliptic Euler collinear solutions of the 3-body problems, which for example then can be further understood using numerical results of Martínez et al. on 3-body Euler solutions in 2004-2006. As an example, we carry out the detailed derivation of the linear stability for an elliptic Euler-Moulton solution of the 4-body problem with two small masses in the middle.
Complementarity and entanglement in bipartite qudit systems
Jakob, Matthias; Bergou, Janos A.
2007-11-15
We consider complementarity in a bipartite quantum system of arbitrary dimensions. Single-partite and bipartite properties turn out to be mutually exclusive quantities. The single-partite properties can be related to a generalized predictability and visibility, which compose two complementary realities by themselves. These properties combined become mutually exclusive to the genuine quantum mechanical bipartite correlations of the system, which can be quantified with the generalized I-concurrence that defines a proper entanglement measure. Consequently, the complementarity relation quantifies entanglement in the bipartite system. The concept of complementarity determines entanglement as a property which mutually excludes any single-partite reality. As an application, we provide a proper definition of distinguishability in an n-port interferometer.
Skill complementarity enhances heterophily in collaboration networks.
Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H Eugene
2016-01-08
Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity is positively correlated with their production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems.
Skill complementarity enhances heterophily in collaboration networks
Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2016-01-01
Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity is positively correlated with their production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems. PMID:26743687
Strong gravitational lensing and dark energy complementarity
Linder, Eric V.
2004-01-21
In the search for the nature of dark energy most cosmological probes measure simple functions of the expansion rate. While powerful, these all involve roughly the same dependence on the dark energy equation of state parameters, with anticorrelation between its present value w_0 and time variation w_a. Quantities that have instead positive correlation and so a sensitivity direction largely orthogonal to, e.g., distance probes offer the hope of achieving tight constraints through complementarity. Such quantities are found in strong gravitational lensing observations of image separations and time delays. While degeneracy between cosmological parameters prevents full complementarity, strong lensing measurements to 1 percent accuracy can improve equation of state characterization by 15-50 percent. Next generation surveys should provide data on roughly 10^5 lens systems, though systematic errors will remain challenging.
Linear stability analysis in the numerical solution of initial value problems
NASA Astrophysics Data System (ADS)
van Dorsselaer, J. L. M.; Kraaijevanger, J. F. B. M.; Spijker, M. N.
This article addresses the general problem of establishing upper bounds for the norms of the nth powers of square matrices. The focus is on upper bounds that grow only moderately (or stay constant) as n, or the order of the matrices, increases. The so-called resolvent condition, occurring in the famous Kreiss matrix theorem, is a classical tool for deriving such bounds. Recently the classical upper bounds known to be valid under Kreiss's resolvent condition have been improved. Moreover, generalizations of this resolvent condition have been considered so as to widen the range of applications. The main purpose of this article is to review and extend some of these new developments. The upper bounds for the powers of matrices discussed in this article are intimately connected with the stability analysis of numerical processes for solving initial(-boundary) value problems in ordinary and partial linear differential equations. The article highlights this connection. The article concludes with numerical illustrations in the solution of a simple initial-boundary value problem for a partial differential equation.
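The phenomenon these bounds control can be seen numerically with a small non-normal example (the matrix below is illustrative): powers of a matrix with spectral radius below one eventually decay, but their norms can first grow through a large transient hump.

```python
import numpy as np

# Non-normal matrix with spectral radius 0.95 < 1: ||A^n|| exhibits a large
# transient hump before the eventual decay -- exactly the behavior that
# resolvent-condition bounds on powers of matrices are designed to control.
A = np.array([[0.95, 10.0],
              [0.0,  0.95]])
norms = [np.linalg.norm(np.linalg.matrix_power(A, n), 2) for n in range(300)]
print(max(norms) > 5 * norms[1])   # transient growth well above ||A||
print(norms[-1] < 1e-2)            # eventual decay to zero
```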
Carey, G.F.; Young, D.M.
1993-12-31
The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
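The LP relaxation studied above is easy to state concretely for α = 2; the small graphs below are illustrative, not the Erdös-Rényi ensemble of the analysis. On a 4-cycle the relaxation matches the integer optimum, while on a triangle it returns the half-integral value 1.5 against an integer optimum of 2, the kind of gap the replica analysis tracks at scale.

```python
import numpy as np
from scipy.optimize import linprog

def vc_lp_relaxation(n, edges):
    """LP relaxation of minimum vertex cover:
    minimize sum_i x_i subject to x_u + x_v >= 1 per edge, 0 <= x_i <= 1."""
    A_ub = np.zeros((len(edges), n))
    for k, (u, v) in enumerate(edges):
        A_ub[k, u] = A_ub[k, v] = -1.0       # encodes -(x_u + x_v) <= -1
    res = linprog(np.ones(n), A_ub=A_ub, b_ub=-np.ones(len(edges)),
                  bounds=[(0, 1)] * n)
    return res.fun

val_cycle = vc_lp_relaxation(4, [(0, 1), (1, 2), (2, 3), (3, 0)])  # IP optimum 2
val_tri = vc_lp_relaxation(3, [(0, 1), (1, 2), (2, 0)])            # IP optimum 2
print(val_cycle, val_tri)    # 2.0 on the cycle, half-integral 1.5 on the triangle
```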
Linear problem of the shock wave disturbance in a non-classical case
NASA Astrophysics Data System (ADS)
Semenko, Evgeny V.
2017-06-01
A linear problem of the shock wave disturbance for a special (non-classical) case, where both pre-shock and post-shock flows are subsonic, is considered. The phase transition for the van der Waals gas is an example of this problem. Isentropic solutions are constructed. In addition, the stability of the problem is investigated and the known result is confirmed: the only neutral stability case occurs here. A strictly algebraic representation of the solution in the plane of the Fourier transform is obtained. This representation allows the solution to be studied both analytically and numerically. In this way, any solution can be decomposed into a sum of acoustic and vorticity waves and into a sum of initial (generated by initial perturbations), transmitted (through the shock) and reflected (from the shock) waves. Thus, the wave incidence/refraction/reflection is investigated. A principal difference of the refraction/reflection from the classical case is found, namely, the waves generated by initial pre-shock perturbations not only pass through the shock (i.e., generate post-shock transmitted waves) but also are reflected from it (i.e., generate pre-shock reflected waves). In turn, the waves generated by the initial post-shock perturbation are not only reflected from the shock (generate post-shock reflected waves) but also pass through it (generate pre-shock transmitted waves).
An improved exploratory search technique for pure integer linear programming problems
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1990-01-01
The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
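The round-then-search idea described above can be sketched on a tiny ILP; the two-variable instance below is hypothetical, and the ±1 neighborhood scan only mimics the flavor of the exploratory moves, not the paper's full procedure or its Pascal implementation.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Maximize 5x + 4y s.t. 6x + 4y <= 24, x + 2y <= 6, x, y >= 0 integer.
c = np.array([-5.0, -4.0])            # linprog minimizes, so negate
A_ub = np.array([[6.0, 4.0], [1.0, 2.0]])
b_ub = np.array([24.0, 6.0])

relax = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2).x  # continuous optimum

def feasible(p):
    return np.all(p >= 0) and np.all(A_ub @ p <= b_ub + 1e-9)

# Round down the continuous optimum, then scan the +-1 neighborhood.
best, best_val = None, np.inf
for delta in itertools.product([-1, 0, 1], repeat=2):
    p = np.floor(relax) + np.array(delta)
    if feasible(p) and c @ p < best_val:
        best, best_val = p, c @ p
print(best, -best_val)    # integer point (4, 0) with objective value 20
```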
NASA Astrophysics Data System (ADS)
Dotti, Gustavo; Gleiser, Reinaldo J.
2009-11-01
The coupled equations for the scalar modes of the linearized Einstein equations around Schwarzschild's spacetime were reduced by Zerilli to a (1+1) wave equation ∂²Ψ_z/∂t² + HΨ_z = 0, where H = −∂²/∂x² + V(x) is the Zerilli 'Hamiltonian' and x is the tortoise radial coordinate. From its definition, for smooth metric perturbations the field Ψ_z is singular at r_s = −6M/[(ℓ − 1)(ℓ + 2)], with ℓ being the mode harmonic number. The equation Ψ_z obeys is also singular, since V has a second-order pole at r_s. This is irrelevant to the black hole exterior stability problem, where r > 2M > 0 and r_s < 0, but it introduces a non-trivial problem in the naked singular case, where M < 0, r_s > 0, and the singularity appears in the relevant range of r (0 < r < ∞). We solve this problem by developing a new approach to the evolution of the even mode, based on a new gauge-invariant function, Ψ̂, that is a regular function of the metric perturbation for any value of M. The relation of Ψ̂ to Ψ_z is provided by an intertwiner operator. The spatial pieces of the (1+1) wave equations that Ψ̂ and Ψ_z obey are related as a supersymmetric pair of quantum Hamiltonians H and Ĥ. For M < 0, Ĥ has a regular potential and a unique self-adjoint extension in a domain D defined by a physically motivated boundary condition at r = 0. This allows us to address the issue of evolution of gravitational perturbations in this non-globally hyperbolic background. This formulation is used to complete the proof of the linear instability of the Schwarzschild naked singularity, by showing that a previously found unstable mode belongs to a complete basis of Ĥ in D, and thus is excitable by generic initial data. This is further illustrated by numerically solving the linearized equations for suitably chosen initial data.
Low energy description of quantum gravity and complementarity
NASA Astrophysics Data System (ADS)
Nomura, Yasunori; Varela, Jaime; Weinberg, Sean J.
2014-06-01
We consider a framework in which the low energy dynamics of quantum gravity is described preserving locality, yet taking into account the effects that are not captured by the naive global spacetime picture, e.g. those associated with black hole complementarity. Our framework employs a "special relativistic" description of gravity; specifically, gravity is treated as a force measured by the observer tied to the coordinate system associated with a freely falling local Lorentz frame. We identify, in simple cases, regions of spacetime in which low energy local descriptions are applicable as viewed from the freely falling frame; in particular, we identify a surface called the gravitational observer horizon on which the local proper acceleration measured in the observer's coordinates becomes the cutoff (string) scale. This allows us to separate "low-energy" local physics from "trans-Planckian," intrinsically quantum gravitational (stringy) physics, and to develop physical pictures of the origins of various effects. We explore the structure of the Hilbert space in which the proposed scheme is realized in a simple manner, and classify its elements according to certain horizons they possess. We also discuss the implications of our framework for the firewall problem. We conjecture that the complementarity picture may persist due to properties of trans-Planckian physics.
Spoor, C F; Zonneveld, F W; Macho, G A
1993-08-01
This paper explores the potential of high-resolution computed tomography (CT) as a morphometric tool in paleoanthropology. The accuracy of linear measurements of enamel thickness and cortical bone thickness taken from CT scans is evaluated by making comparison with measurements taken directly from physical sections. The measurements of cortical bone are taken on extant and fossil specimens with and without attached matrix, and the dental specimens studied include a sample of 12 extant human molars. Local CT numbers (representing X-ray attenuation) are used to determine the exact position of the boundaries of a structure. Using this technique most studied dimensions, including four of human molar enamel thickness, could be obtained from CT scans with a maximum error range of +/- 0.1 mm. The limitations of the method are discussed with special reference to problems associated with highly mineralized fossils.
A review of vector convergence acceleration methods, with applications to linear algebra problems
NASA Astrophysics Data System (ADS)
Brezinski, C.; Redivo-Zaglia, M.
In this article, in a few pages, we try to give an idea of convergence acceleration methods and extrapolation procedures for vector sequences, and to present some applications to linear algebra problems and to the treatment of the Gibbs phenomenon for Fourier series, in order to show their effectiveness. The interested reader is referred to the literature for more details. In the bibliography, due to space limitations, we give only the more recent items; for older ones, we refer to Brezinski and Redivo-Zaglia (Extrapolation Methods: Theory and Practice, North-Holland, 1991). This book also contains, on a magnetic support, a library (in Fortran 77) of convergence acceleration algorithms and extrapolation methods.
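As a flavor of what convergence acceleration does, here is a minimal componentwise Aitken Δ² extrapolation. This is a scalar method applied per component; the genuinely vector methods the authors survey (MPE, RRE, the ε-algorithms) couple the components and are more powerful. The example sequence is invented for illustration.

```python
# Componentwise Aitken delta-squared extrapolation of a vector sequence.
# For an exactly geometric sequence x_n = x* + rho^n * v, three consecutive
# iterates are enough to recover the limit x* exactly.

def aitken(x0, x1, x2):
    """Extrapolate one vector from three consecutive iterates."""
    out = []
    for a, b, c in zip(x0, x1, x2):
        denom = c - 2.0 * b + a
        # Fall back to the middle iterate when the denominator vanishes.
        out.append(b if abs(denom) < 1e-14 else a - (b - a) ** 2 / denom)
    return out

# Linearly convergent sequence with limit (1, -2) and ratio rho = 0.5.
limit, rho, v = [1.0, -2.0], 0.5, [3.0, 4.0]
seq = [[l + rho ** n * vi for l, vi in zip(limit, v)] for n in range(3)]
acc = aitken(*seq)   # recovers the limit from just three terms
```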
First-order system least squares for the pure traction problem in planar linear elasticity
Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.
1996-12-31
This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L² norms to define the FOSLS functional, is shown under certain H² regularity assumptions to admit optimal H¹-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H⁻¹ norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L² norm and for displacement in an H¹ norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.
Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem
NASA Technical Reports Server (NTRS)
Lu, Huei-Iin; Robertson, Franklin R.
1999-01-01
A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can alternatively be solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity at an intermediate resolution. Improved solutions were achieved by including the singular-component solutions that best fit the observed wind data.
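The three determinacy levels above can be illustrated with a generic pseudo-inverse computation. This is a toy numerical sketch with random matrices, not the spectral LBE itself: the over-determined case leaves a least-squares residual, while the under-determined case is satisfied exactly by the minimum-norm solution.

```python
import numpy as np

# Generic illustration of over- vs under-determined linear systems solved
# with the Moore-Penrose pseudo-inverse (toy data, not the LBE).
rng = np.random.default_rng(0)

# Over-determined: more equations than unknowns -> least-squares fit,
# a nonzero residual generally remains.
A_od = rng.standard_normal((6, 3))
b_od = rng.standard_normal(6)
x_od = np.linalg.pinv(A_od) @ b_od
residual = np.linalg.norm(A_od @ x_od - b_od)

# Under-determined: fewer equations than unknowns -> the pseudo-inverse
# returns the minimum-norm solution, which satisfies the system exactly.
A_ud = rng.standard_normal((3, 6))
b_ud = rng.standard_normal(3)
x_ud = np.linalg.pinv(A_ud) @ b_ud
exact = np.linalg.norm(A_ud @ x_ud - b_ud)   # ~0 up to round-off
```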
Constraints to solve parallelogram grid problems in 2D non separable linear canonical transform
NASA Astrophysics Data System (ADS)
Zhao, Liang; Healy, John J.; Muniraj, Inbarasan; Cui, Xiao-Guang; Malallah, Ra'ed; Ryle, James P.; Sheridan, John T.
2017-05-01
The 2D non-separable linear canonical transform (2D-NS-LCT) can model a range of paraxial optical systems. Digital algorithms to evaluate the 2D-NS-LCT are important in modeling light field propagation and are also of interest in many digital signal processing applications. In [Zhao 14] we reported that a given 2D input image with a rectangular shape/boundary in general results in a parallelogram output sampling grid (generally in affine rather than Cartesian coordinates), thus limiting further calculations, e.g. the inverse transform. One possible solution is to use interpolation techniques; however, this reduces the speed and accuracy of the numerical approximations. To alleviate this problem, in this paper some constraints are derived under which the output samples are located in Cartesian coordinates. Therefore, no interpolation operation is required and the calculation error can be significantly reduced.
The efficient solution of the (quietly constrained) noisy, linear regulator problem
NASA Astrophysics Data System (ADS)
Gregory, John; Hughes, H. R.
2007-09-01
In a previous paper we gave a new, natural extension of the calculus of variations/optimal control theory to a (strong) stochastic setting. We now extend the theory of this most fundamental chapter of optimal control in several directions. Most importantly, we present a new method of stochastic control, adding Brownian motion which makes the problem "noisy." Secondly, we show how to obtain efficient solutions: direct stochastic integration for simpler problems and/or efficient and accurate numerical methods with a global a priori error of O(h^(3/2)) for more complex problems. Finally, we include "quiet" constraints, i.e. deterministic relationships between the state and control variables. Our theory and results can be immediately restricted to the non-"noisy" (deterministic) case, yielding efficient numerical solution techniques and an a priori error of O(h²). In this event we obtain the most efficient method of solving the (constrained) classical Linear Regulator Problem. Our methods are different from the standard theory of stochastic control. In some cases the solutions coincide or at least are closely related. However, our methods have many advantages, including those mentioned above. In addition, our methods more directly follow the motivation and theory of classical (deterministic) optimization, which is perhaps the most important area of physical and engineering science. Our results follow from related ideas in the deterministic theory. Thus, our approximation methods follow by guessing at an algorithm, but the proof of global convergence uses stochastic techniques because our trajectories are not differentiable. Along these lines, a general drift term in the trajectory equation is properly viewed as an added constraint and extends ideas given in the deterministic case by the first author.
A linear stability analysis for nonlinear, grey, thermal radiative transfer problems
Wollaber, Allan B.; Larsen, Edward W.
2011-02-20
We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method, R package version 1), alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular FORTRAN implementation of the SQP method; Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2), Stanford, CA: Stanford University Systems Optimization Laboratory), and SLSQP (another SQP implementation, available as part of the NLOPT collection; Johnson, 2014, The NLopt nonlinear-optimization package, retrieved from http://ab-initio.mit.edu/nlopt) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
Systematic regularization of linear inverse solutions of the EEG source localization problem.
Phillips, Christophe; Rugg, Michael D; Friston, Karl J
2002-09-01
Distributed linear solutions of the EEG source localization problem are used routinely. Here we describe an approach based on the weighted minimum norm method that imposes constraints using anatomical and physiological information derived from other imaging modalities to regularize the solution. In this approach the hyperparameters controlling the degree of regularization are estimated using restricted maximum likelihood (ReML). EEG data are always contaminated by noise, e.g., exogenous noise and background brain activity. The conditional expectation of the source distribution, given the data, is attained by carefully balancing the minimization of the residuals induced by noise and the improbability of the estimates as determined by their priors. This balance is specified by hyperparameters that control the relative importance of fitting and conforming to prior constraints. Here we introduce a systematic approach to this regularization problem, in the context of a linear observation model we have described previously. In this model, basis functions are extracted to reduce the solution space a priori in the spatial and temporal domains. The basis sets are motivated by knowledge of the evoked EEG response and information theory. In this paper we focus on an iterative "expectation-maximization" procedure to jointly estimate the conditional expectation of the source distribution and the ReML hyperparameters on which this solution rests. We used simulated data mixed with real EEG noise to explore the behavior of the approach with various source locations, priors, and noise levels. The results enabled us to conclude: (i) Solutions in the space of informed basis functions have high face and construct validity, in relation to conventional analyses. (ii) The hyperparameters controlling the degree of regularization vary largely with source geometry and noise. The second conclusion speaks to the usefulness of using adaptive ReML hyperparameter estimates.
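The weighted-minimum-norm building block behind such source estimates can be sketched as a Tikhonov-regularized inverse. The leadfield, data, and fixed hyperparameter below are invented for illustration; the ReML hyperparameter estimation that the paper actually performs is not implemented here.

```python
import numpy as np

# Tikhonov-regularized minimum-norm inverse: the generic core of
# weighted-minimum-norm source estimation (hyperparameter fixed by hand).

def min_norm_estimate(L, y, lam):
    """x = L^T (L L^T + lam*I)^{-1} y  (regularized minimum-norm solution)."""
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, y)

rng = np.random.default_rng(1)
L = rng.standard_normal((8, 20))       # toy leadfield: 8 sensors, 20 sources
x_true = np.zeros(20)
x_true[3] = 1.0                        # one active source
y = L @ x_true + 0.01 * rng.standard_normal(8)   # noisy sensor data
x_hat = min_norm_estimate(L, y, lam=1e-2)
```

Raising `lam` shrinks the estimate toward zero (more regularization); the paper's contribution is choosing such hyperparameters in a principled, data-driven way rather than by hand.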
Horizons of description: Black holes and complementarity
NASA Astrophysics Data System (ADS)
Bokulich, Peter Joshua Martin
Niels Bohr famously argued that a consistent understanding of quantum mechanics requires a new epistemic framework, which he named complementarity. This position asserts that even in the context of quantum theory, classical concepts must be used to understand and communicate measurement results. The apparent conflict between certain classical descriptions is avoided by recognizing that their application now crucially depends on the measurement context. Recently it has been argued that a new form of complementarity can provide a solution to the so-called information loss paradox. Stephen Hawking argues that the evolution of black holes cannot be described by standard unitary quantum evolution, because such evolution always preserves information, while the evaporation of a black hole will imply that any information that fell into it is irrevocably lost---hence a "paradox." Some researchers in quantum gravity have argued that this paradox can be resolved if one interprets certain seemingly incompatible descriptions of events around black holes as instead being complementary. In this dissertation I assess the extent to which this black hole complementarity can be undergirded by Bohr's account of the limitations of classical concepts. I begin by offering an interpretation of Bohr's complementarity and the role that it plays in his philosophy of quantum theory. After clarifying the nature of classical concepts, I offer an account of the limitations these concepts face, and argue that Bohr's appeal to disturbance is best understood as referring to these conceptual limits. Following preparatory chapters on issues in quantum field theory and black hole mechanics, I offer an analysis of the information loss paradox and various responses to it. I consider the three most prominent accounts of black hole complementarity and argue that they fail to offer sufficient justification for the proposed incompatibility between descriptions. The lesson that emerges from this
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.
1991-01-01
The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validates this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely matches simulation results. Flight-test data are also presented.
NASA Astrophysics Data System (ADS)
Yang, Bian-Xia; Sun, Hong-Rui; Feng, Zhaosheng
In this paper, we are concerned with the unilateral global bifurcation structure of the fractional differential equation (-Δ)^α u(x) = λ a(x) u(x) + F(x, u, λ), x ∈ Ω; u = 0 in ℝ^N \ Ω, with nondifferentiable nonlinearity F. We show that there are two distinct unbounded subcontinua 𝒞⁺ and 𝒞⁻ consisting of the continuum 𝒞 emanating from [λ₁ - d, λ₁ + d] × {0}, and two unbounded subcontinua 𝒟⁺ and 𝒟⁻ consisting of the continuum 𝒟 emanating from [λ₁ - d̄, λ₁ + d̄] × {∞}. As an application of these unilateral global bifurcation results, we present the existence of the principal half-eigenvalues of the half-linear fractional eigenvalue problem. Finally, we deal with the existence of constant-sign solutions for a class of fractional nonlinear problems. The main results of this paper generalize the known results on classical Laplace operators to fractional Laplace operators.
Method of expanding hyperspheres - an interior algorithm for linear programming problems
Chandrupatla, T.
1994-12-31
A new interior algorithm using some properties of hyperspheres is proposed for the solution of linear programming problems with inequality constraints: maximize cᵀx subject to Ax ≤ b, where c and the rows of A are normalized in the Euclidean sense so that ‖c‖ = √(cᵀc) = 1 and ‖aᵢ‖ = √(AᵢAᵢᵀ) = 1 for i = 1 to m. The feasible region is the polytope bounded by the constraint planes. We start from an interior point and pass a plane normal to c through it. A sphere is expanded until it touches a constraint plane; the expansion then proceeds, with the sphere keeping contact with the previously touched planes, until it touches another plane. The procedure is continued until the sphere touches the c-plane and n constraint planes. We move to the center of the sphere and repeat the process. The interior maximum is reached when the radius of the expanded sphere is less than a critical value, say ε. Problems of direction finding, determination of the incoming constraint, sphere jamming, and evaluation of the initial feasible point are discussed.
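The basic geometric primitive of the method is that, with unit-norm constraint rows, the signed distance from an interior point to each constraint plane is immediate, so the radius of the largest sphere centered at that point is just the smallest such distance. A toy sketch (the unit-square example is invented; the full algorithm additionally slides the center and tracks the set of touched planes):

```python
# Radius of the largest sphere centered at interior point x inside the
# polytope {x : Ax <= b}, assuming each row a_i of A satisfies ||a_i|| = 1.
# Then dist(x, plane a_i^T x = b_i) = b_i - a_i^T x.

def inscribed_radius(x, A, b):
    dists = [bk - sum(ak * xk for ak, xk in zip(row, x))
             for row, bk in zip(A, b)]
    return min(dists)

# Unit square 0 <= x, y <= 1 written as Ax <= b with unit-norm rows.
A = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
b = [1.0, 0.0, 1.0, 0.0]
r_center = inscribed_radius([0.5, 0.5], A, b)   # sphere touching all sides
```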
Response of Non-Linear Shock Absorbers-Boundary Value Problem Analysis
NASA Astrophysics Data System (ADS)
Rahman, M. A.; Ahmed, U.; Uddin, M. S.
2013-08-01
A nonlinear boundary value problem of two degree-of-freedom (DOF) untuned vibration damper systems using nonlinear springs and dampers has been numerically studied. As far as the untuned damper is concerned, sixteen different combinations of linear and nonlinear springs and dampers have been comprehensively analyzed, taking into account transient terms. For the different cases, a comparative study is made of response versus time for different spring and damper types at three important frequency ratios: one at r = 1, one at r > 1 and one at r < 1. The response of the system is changed because of the spring and damper nonlinearities; the change is different for different cases. Accordingly, an initially stable absorber may become unstable with time and vice versa. The analysis also shows that higher nonlinearity terms make the system more unstable. The numerical simulation includes transient vibrations. Although the problems are much more complicated than those for a tuned absorber, a comparison of the results generated by the present numerical scheme with the exact ones shows quite reasonable agreement.
Haider, M A; Guilak, F
2000-06-01
The micropipette aspiration test has been used extensively in recent years as a means of quantifying cellular mechanics and molecular interactions at the microscopic scale. However, previous studies have generally modeled the cell as an infinite half-space in order to develop an analytical solution for a viscoelastic solid cell. In this study, an axisymmetric boundary integral formulation of the governing equations of incompressible linear viscoelasticity is presented and used to simulate the micropipette aspiration contact problem. The cell is idealized as a homogeneous and isotropic continuum with constitutive equation given by three-parameter (E, τ₁, τ₂) standard linear viscoelasticity. The formulation is used to develop a computational model via a "correspondence principle" in which the solution is written as the sum of a homogeneous (elastic) part and a nonhomogeneous part, which depends only on past values of the solution. Via a time-marching scheme, the solution of the viscoelastic problem is obtained by employing an elastic boundary element method with modified boundary conditions. The accuracy and convergence of the time-marching scheme are verified using an analytical solution. An incremental reformulation of the scheme is presented to facilitate the simulation of micropipette aspiration, a nonlinear contact problem. In contrast to the half-space model (Sato et al., 1990), this computational model accounts for nonlinearities in the cell response that result from a consideration of geometric factors including the finite cell dimension (radius R), curvature of the cell boundary, evolution of the cell-micropipette contact region, and curvature of the edges of the micropipette (inner radius a, edge curvature radius ε). Using 60 quadratic boundary elements, a micropipette aspiration creep test with ramp time t* = 0.1 s and ramp pressure p*/E = 0.8 is simulated for the cases a/R = 0.3, 0.4, 0.5 using mean parameter values for primary chondrocytes.
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
An Eulerian method for multi-component problems in non-linear elasticity with sliding interfaces
NASA Astrophysics Data System (ADS)
Barton, Philip T.; Drikakis, Dimitris
2010-08-01
This paper is devoted to developing a multi-material numerical scheme for non-linear elastic solids, with emphasis on the inclusion of interfacial boundary conditions. In particular for colliding solid objects it is desirable to allow large deformations and relative slide, whilst employing fixed grids and maintaining sharp interfaces. Existing schemes utilising interface tracking methods such as volume-of-fluid typically introduce erroneous transport of tangential momentum across material boundaries. Aside from combatting these difficulties one can also make improvements in a numerical scheme for multiple compressible solids by utilising governing models that facilitate application of high-order shock capturing methods developed for hydrodynamics. A numerical scheme that simultaneously allows for sliding boundaries and utilises such high-order shock capturing methods has not yet been demonstrated. A scheme is proposed here that directly addresses these challenges by extending a ghost cell method for gas-dynamics to solid mechanics, by using a first-order model for elastic materials in conservative form. Interface interactions are captured using the solution of a multi-material Riemann problem which is derived in detail. Several different boundary conditions are considered including solid/solid and solid/vacuum contact problems. Interfaces are tracked using level-set functions. The underlying single material numerical method includes a characteristic based Riemann solver and high-order WENO reconstruction. Numerical solutions of example multi-material problems are provided in comparison to exact solutions for the one-dimensional augmented system, and for a two-dimensional friction experiment.
NASA Astrophysics Data System (ADS)
Zhou, Qinglong; Long, Yiming
2015-06-01
In this paper, we prove that the linearized system of the elliptic triangle homographic solution of the planar charged three-body problem can be transformed into that of the elliptic equilateral triangle solution of the planar classical three-body problem. Consequently, the results of Martínez, Samà and Simó (2006) [15] and of Hu, Long and Sun (2014) [6] can be applied to these solutions of the charged three-body problem to obtain their linear stability.
Wu, Z; Zhang, Y
2008-01-01
The double digestion problem for DNA restriction mapping has been proved to be NP-complete and intractable if the numbers of DNA fragments become large. Several approaches to the problem have been tested and proved to be effective only for small problems. In this paper, we formulate the problem as a mixed-integer linear program (MIP), following Waterman (1995) in a slightly different form. With this formulation, and using state-of-the-art integer programming techniques, we can solve randomly generated problems whose search space sizes are many orders of magnitude larger than previously reported test sizes.
APPLICATION OF LINEAR PROGRAMMING TO FACILITY MAINTENANCE PROBLEMS IN THE NAVY SHORE ESTABLISHMENT.
Keywords: linear programming; naval shore facilities; maintenance; costs; mathematical models; management planning and control; manpower; feasibility studies; optimization; management engineering.
A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems
Song, Fengguang; Dongarra, Jack
2014-10-01
Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
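The dependency-driven execution idea behind such tile algorithms can be sketched with a toy serial scheduler. The task names below are invented for illustration; the actual runtime dispatches ready tasks to CPUs and GPUs concurrently and transfers data between compute nodes, which this sketch does not attempt.

```python
from collections import deque

# Toy dependency-driven task execution: a task becomes ready as soon as all
# of its prerequisites have completed (serial here, concurrent in the paper).

def run_task_graph(deps):
    """deps: {task: set of prerequisite tasks}. Returns an execution order."""
    indeg = {t: len(d) for t, d in deps.items()}
    children = {t: [] for t in deps}
    for t, d in deps.items():
        for p in d:
            children[p].append(t)
    ready = deque(sorted(t for t, k in indeg.items() if k == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)                 # "execute" the task
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)         # all prerequisites done
    if len(order) != len(deps):
        raise ValueError("cycle in task graph")
    return order

# A chain of tile tasks, loosely in the spirit of a tiled Cholesky sweep.
deps = {"potrf1": set(), "trsm": {"potrf1"}, "syrk": {"trsm"},
        "potrf2": {"syrk"}}
order = run_task_graph(deps)
```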
Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems
NASA Astrophysics Data System (ADS)
Puzyrev, Vladimir; Koric, Seid; Wilkin, Scott
2016-04-01
High performance computing is absolutely necessary for large-scale geophysical simulations. In order to obtain a realistic image of a geologically complex area, industrial surveys collect vast amounts of data, making the computational cost extremely high for the subsequent simulations. A major computational bottleneck of modeling and inversion algorithms is solving the large, ill-conditioned sparse systems of linear equations in complex domains with multiple right-hand sides. Recently, parallel direct solvers have been successfully applied to multi-source seismic and electromagnetic problems. These methods are robust and exhibit good performance, but often require large amounts of memory and have limited scalability. In this paper, we evaluate modern direct solvers on large-scale modeling examples that were previously considered unachievable with these methods. Performance and scalability tests utilizing up to 65,536 cores on the Blue Waters supercomputer clearly illustrate the robustness, efficiency and competitiveness of direct solvers compared to iterative techniques. Wide use of direct methods utilizing modern parallel architectures will allow modeling tools to accurately support multi-source surveys and 3D data acquisition geometries, thus promoting a more efficient use of electromagnetic methods in geophysics.
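The factor-once, solve-many pattern that makes direct solvers attractive for multiple right-hand sides can be sketched in a few lines. The matrix, the problem size, and the use of SciPy's SuperLU interface are illustrative assumptions, not details from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Sparse, well-conditioned test matrix (tridiagonal, diagonally dominant),
# standing in for the much larger systems discussed in the abstract.
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = spla.splu(A)  # factor once: the expensive step

# Eight "sources" -> eight right-hand sides reusing the same factorization.
rhs = np.random.default_rng(0).standard_normal((n, 8))
X = np.column_stack([lu.solve(rhs[:, k]) for k in range(rhs.shape[1])])

residual = np.linalg.norm(A @ X - rhs)
print(residual)
```

Each additional right-hand side costs only a triangular solve, which is why direct methods pay off for multi-source surveys despite their memory footprint.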
Error Analysis Of Students Working About Word Problem Of Linear Program With NEA Procedure
NASA Astrophysics Data System (ADS)
Santoso, D. A.; Farid, A.; Ulum, B.
2017-06-01
Evaluation and assessment are an important part of learning. In the evaluation of learning, written tests are still commonly used; however, the tests are usually not followed up by further evaluation. The process stops at the grading stage and does not examine the working process and the errors made by students. Yet if a student shows a pattern of errors or process errors, remedial action can be focused on the fault and on why it happened. The NEA procedure provides a way for educators to evaluate student progress more comprehensively. In this study, students' mistakes in working on word problems about linear programming have been analyzed. The mistakes students make most often occur in the modeling (transformation) phase and in process skills, with overall percentage distributions of 20% and 15%, respectively. According to our observations, these errors most commonly arise from a lack of precision in modeling and from hasty calculation. With this error analysis, educators are expected to be able to choose the right remedy in the next lesson.
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. A comparison framework evaluates the multi-route formulation against the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine whether any arrival taxi time savings can be achieved by allowing arrivals two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when the perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, the cumulative arrival taxi time savings of the multi-route formulation can be as high as 3.6 hours more than the single route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single route formulation, but the average arrival taxi time is significantly decreased.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved by reformulating the rational problem in polynomial form.
NASA Astrophysics Data System (ADS)
Siddheshwar, P. G.; Mahabaleswar, U. S.; Andersson, H. I.
2013-08-01
The paper discusses a new analytical procedure for solving the non-linear boundary layer equation arising in a linear stretching sheet problem involving a Newtonian/non-Newtonian liquid. Using a technique akin to perturbation, the problem gives rise to a system of non-linear governing differential equations that are solved exactly. An analytical expression is obtained for the stream function and velocity as a function of the stretching parameters. The Clairaut equation is obtained from a consistency consideration, and its solution is shown to be that of the stretching sheet boundary layer equation. The present study throws light on the analytical solution of a class of boundary layer equations arising in the stretching sheet problem.
The Limits of Black Hole Complementarity
NASA Astrophysics Data System (ADS)
Susskind, Leonard
Black hole complementarity, as originally formulated in the 1990s by Preskill, 't Hooft, and myself, is now being challenged by the Almheiri-Marolf-Polchinski-Sully firewall argument. The AMPS argument relies on an implicit assumption, the "proximity" postulate, which says that the interior of a black hole must be constructed from degrees of freedom that are physically near the black hole. The proximity postulate manifestly contradicts the idea that interior information is redundant with information in the Hawking radiation, which is very far from the black hole. AMPS argue that a violation of the proximity postulate would lead to a contradiction in a thought experiment in which Alice distills the Hawking radiation and brings a bit back to the black hole. According to AMPS, the only way to protect against the contradiction is for a firewall to form at the Page time. But the measurement that Alice must make is of such a fine-grained nature that carrying it out before the black hole evaporates may be impossible. Harlow and Hayden have found evidence that the limits of quantum computation do in fact prevent Alice from carrying out her experiment in less than exponential time. If their conjecture is correct, then black hole complementarity may be alive and well. My aim here is to give an overview of the firewall argument and its basis in the proximity postulate, as well as the counterargument based on computational complexity, as conjectured by Harlow and Hayden.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions, and linear constraints is introduced. The model is transformed into a deterministic multiple-objective nonlinear programming model by introducing the random variables' expectations. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formulation of the proposed approach. Sufficient and necessary conditions for (properly, weakly) efficient solutions and some structural characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm based on reference directions and weighted sums is proposed. By varying the parameter vector on the right-hand side of the model, the decision maker (DM) can freely search the efficient frontier. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized, besides expectation and risk. The interactive approach is illustrated with a practical example.
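The reduction to a linear complementarity problem can be made concrete with a small sketch. Projected Gauss-Seidel is a standard iterative LCP solver used here as an illustrative stand-in for whatever solver the paper employs; the matrices are invented test data.

```python
import numpy as np

def lcp_pgs(M, q, iters=500):
    """Projected Gauss-Seidel for the LCP:
    find x >= 0 with w = M x + q >= 0 and x . w = 0
    (converges for symmetric positive definite M)."""
    x = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            # Row residual excluding the diagonal term.
            r = q[i] + M[i] @ x - M[i, i] * x[i]
            # Solve row i for x[i], then project onto x[i] >= 0.
            x[i] = max(0.0, -r / M[i, i])
    return x

M = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # symmetric positive definite
q = np.array([-1.0, -2.0])
x = lcp_pgs(M, q)
w = M @ x + q
print(x, w)
```

For this test instance the solution is interior (x = [1/11, 7/11] with w = 0), so complementarity x . w = 0 holds trivially; a boundary instance would activate the projection.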
Chabini, I.; Florian, M.
1994-12-31
In this paper we present a new class of sequential and parallel algorithms for transportation problems with linear and convex costs. First, we consider a capacitated transportation problem with an entropy-type objective function. We show that this problem has some interesting properties, namely that its optimal solution satisfies both the non-negativity and capacity constraints. We then give a new solution method for this problem. The algorithm consists of a sequence of "balancing" iterations on the conservation-of-flow constraints, which may be viewed as a generalization of the well-known RAS algorithm for matrix balancing. We prove the convergence of this method and extend it to strictly convex and linear cost transportation problems. For differentiable convex costs we develop an adaptation in which each projection is an entropy-type capacitated transportation problem. For linear costs, we prove a triple equivalence between the entropy projection method, the proximal minimization approach (with our entropy-type function), and an entropy barrier method. We give a convergence rate analysis for strongly convex costs and linear objective functions. We show efficient implementations in both serial and parallel environments. Computational results indicate that this method yields very encouraging results. We solve large problems with several million variables on a network of transputers and Sun workstations. For the linear case, the serial implementation is compared to network simplex codes such as RELAX and RNET. Computational experiments indicate that this algorithm can outperform both RELAX and RNET. The parallel implementations are analysed using, in particular, a new measure of performance developed by the authors. The results demonstrate that this measure can give more information than the classical measure of speedup. Some unexpected behaviors are reported.
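The classical RAS iteration that the paper's balancing method generalizes can itself be sketched in a few lines: alternately rescale the rows and columns of a nonnegative matrix until both margins match their targets. The matrix and the margin targets below are invented for illustration.

```python
import numpy as np

def ras(A, row_targets, col_targets, iters=200):
    """RAS (biproportional) balancing: alternately rescale rows and
    columns of a positive matrix to match the target margins."""
    X = A.astype(float).copy()
    for _ in range(iters):
        X *= (row_targets / X.sum(axis=1))[:, None]   # fix row sums
        X *= (col_targets / X.sum(axis=0))[None, :]   # fix column sums
    return X

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# The two target vectors must share the same total mass (5 + 5 = 4 + 6).
X = ras(A, row_targets=np.array([5.0, 5.0]), col_targets=np.array([4.0, 6.0]))
print(X.sum(axis=1), X.sum(axis=0))
```

Each half-step enforces one family of constraints exactly while slightly perturbing the other; for positive matrices with consistent margins the alternation converges geometrically.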
NASA Astrophysics Data System (ADS)
Ebrahimnejad, Ali
2015-08-01
There are several methods in the literature for solving fuzzy variable linear programming problems (fuzzy linear programs in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out, and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of such fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. One application of this algorithm, to solving bounded transportation problems with fuzzy supplies and demands, is also dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.
NASA Astrophysics Data System (ADS)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
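A minimal sketch of the forward block Gauss-Seidel sweep on a block tridiagonal system, assuming invertible diagonal blocks as in the abstract. The block sizes, names, and the diagonally dominant test system are our own illustrative choices, picked so the plain iteration converges (unlike the typical applications described above, where it serves as a preconditioner instead).

```python
import numpy as np

def block_gs(D, L, U, b, x, sweeps=50):
    """Forward block Gauss-Seidel sweeps for a block tridiagonal system.

    D[i] are the (invertible) diagonal blocks, L[i] the subdiagonal block
    coupling unknown block i+1 to block i, U[i] the superdiagonal block
    coupling block i to block i+1; b and x are lists of block vectors.
    """
    N = len(D)
    for _ in range(sweeps):
        for i in range(N):
            r = b[i].copy()
            if i > 0:
                r -= L[i - 1] @ x[i - 1]   # uses the freshly updated block
            if i < N - 1:
                r -= U[i] @ x[i + 1]       # uses the old value
            x[i] = np.linalg.solve(D[i], r)
    return x

# Small diagonally dominant test system so the plain iteration converges.
rng = np.random.default_rng(1)
D = [4.0 * np.eye(2) for _ in range(3)]
L = [-np.eye(2) for _ in range(2)]
U = [-np.eye(2) for _ in range(2)]
b = [rng.standard_normal(2) for _ in range(3)]
x = block_gs(D, L, U, b, [np.zeros(2) for _ in range(3)])

# Residual of the three block equations.
res = max(
    np.linalg.norm(D[0] @ x[0] + U[0] @ x[1] - b[0]),
    np.linalg.norm(L[0] @ x[0] + D[1] @ x[1] + U[1] @ x[2] - b[1]),
    np.linalg.norm(L[1] @ x[1] + D[2] @ x[2] - b[2]),
)
print(res)
```

To use the same sweep as a preconditioner, one would apply a single sweep (sweeps=1, starting from zero) inside each Krylov iteration rather than iterating it to convergence.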
The late Universe with non-linear interaction in the dark sector: The coincidence problem
NASA Astrophysics Data System (ADS)
Bouhmadi-López, Mariam; Morais, João; Zhuk, Alexander
2016-12-01
We study the Universe at the late stage of its evolution and deep inside the cell of uniformity. At such a scale the Universe is highly inhomogeneous and filled with discretely distributed inhomogeneities in the form of galaxies and groups of galaxies. As a matter source, we consider dark matter (DM) and dark energy (DE) with a non-linear interaction Q = 3Hγ ε̄_DE ε̄_DM / (ε̄_DE + ε̄_DM), where γ is a constant. We assume that DM is pressureless and that DE has a constant equation-of-state parameter w. In the considered model, the energy densities of the dark sector components present a scaling behaviour, ε̄_DM / ε̄_DE ∼ (a_0/a)^(−3(w+γ)). We investigate the possibility that the perturbations of DM and DE, which are interacting among themselves, could be coupled to the galaxies, with the former being concentrated around them. To carry out our analysis, we consider the theory of scalar perturbations (within the mechanical approach) and obtain the sets of parameters (w, γ) which do not contradict it. We conclude that two sets, (w = −2/3, γ = 1/3) and (w = −1, γ = 1/3), are of special interest. First, in these cases the energy densities of DM and DE are concentrated around galaxies, confirming that they are coupled fluids. Second, we show that for both of them the coincidence problem is less severe than in the standard ΛCDM. Third, the set (w = −1, γ = 1/3) is within the observational constraints. Finally, we also obtain an expression for the gravitational potential in the considered model.
Zainudin, Suhaila; Arif, Shereena M.
2017-01-01
Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods has been inaccurate prediction of cascade motifs. A cascade error is the wrong prediction of a cascade motif, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, the discussion of specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted in past studies were not specifically geared towards proving the ability of GRN prediction methods to avoid cascade errors. Hence, this research proposes Multiple Linear Regression (MLR) to infer a GRN from gene expression data and to avoid wrongly inferring an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations in the real experimental datasets was far less than the number of predictors, some predictors were eliminated by extracting random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade errors using a novel experimental procedure proposed in this work. The experiment revealed that the number of cascade errors was very minimal. Apart from that, the Belsley collinearity test proved that multicollinearity greatly affected the datasets used in this experiment. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5. PMID:28250767
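The intuition that multiple linear regression can suppress a cascade error can be shown on a toy construction of our own (not the paper's data): for a chain A → B → C, regressing C jointly on A and B assigns the effect to B, while a pairwise correlation between A and C would wrongly suggest a direct A → C edge.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
A = rng.standard_normal(n)                  # upstream regulator
B = 2.0 * A + 0.5 * rng.standard_normal(n)  # direct target of A
C = 3.0 * B + 0.1 * rng.standard_normal(n)  # direct target of B only

# Regress C jointly on A and B (plus an intercept column).
X = np.column_stack([A, B, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, C, rcond=None)
print(coef)  # coefficient on A should be near 0, on B near 3
```

The near-zero coefficient on A reflects that, conditional on B, A carries no extra information about C; this is exactly the conditioning that pairwise correlation lacks, though as the Belsley test in the paper warns, strong collinearity between A and B inflates the variance of these estimates.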
Anastassi, Z. A.; Simos, T. E.
2010-09-30
We develop a new family of explicit symmetric linear multistep methods for the efficient numerical solution of the Schroedinger equation and related problems with oscillatory solution. The new methods are trigonometrically fitted and have improved intervals of periodicity as compared to the corresponding classical method with constant coefficients and other methods from the literature. We also apply the methods along with other known methods to real periodic problems, in order to measure their efficiency.
Gender differences in interpersonal complementarity within roommate dyads.
Ansell, Emily B; Kurtz, John E; Markey, Patrick M
2008-04-01
Complementarity theory proposes specific hypotheses regarding interpersonal styles that will result in successful relationships. The present study sought to extend previous research on gender differences in complementarity through the examination of same-sex peer dyads and the use of informant reports of interpersonal style. One hundred twenty participants (30 male and 30 female roommate dyads) completed interpersonal circumplex ratings of their roommates and a relationship cohesion measure. Examinations of complementarity indicate that women reported significantly more complementarity than men within their roommate dyads. However, for men and women, the closer the dyad was to perfect complementarity in terms of dominance, the more cohesive the relationship. Results are discussed in relation to gender differences in social development.
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
NASA Astrophysics Data System (ADS)
Rozhdestvenskaya, Ekaterina A.
2011-02-01
The existence of a solution of the Dirichlet problem for a second order elliptic equation with non-linear part discontinuous in the phase variable is proved in the cases of resonance on the left and resonance on the right of the first eigenvalue of the differential operator in the situation where the Landesman-Lazer conditions do not hold.
NASA Astrophysics Data System (ADS)
Renac, Florent
2011-06-01
An algorithm for stabilizing linear iterative schemes is developed in this study. The recursive projection method is applied in order to stabilize divergent numerical algorithms. A criterion for selecting the divergent subspace of the iteration matrix with an approximate eigenvalue problem is introduced. The performance of the present algorithm is investigated in terms of storage requirements and CPU costs and is compared to the original Krylov criterion. Theoretical results on the divergent subspace selection accuracy are established. The method is then applied to the resolution of the linear advection-diffusion equation and to a sensitivity analysis for a turbulent transonic flow in the context of aerodynamic shape optimization. Numerical experiments demonstrate better robustness and faster convergence properties of the stabilization algorithm with the new criterion based on the approximate eigenvalue problem. This criterion requires only slight additional operations and memory which vanish in the limit of large linear systems.
Dark matter complementarity and the Z' portal
NASA Astrophysics Data System (ADS)
Alves, Alexandre; Berlin, Asher; Profumo, Stefano; Queiroz, Farinaldo S.
2015-10-01
Z' gauge bosons arise in many particle physics models as mediators between the dark and visible sectors. We exploit dark matter (DM) complementarity and derive stringent and robust collider, direct and indirect constraints, as well as limits from the muon magnetic moment. We rule out almost the entire region of the parameter space that yields the right dark matter thermal relic abundance, using a generic parametrization of the Z'-fermion couplings normalized to the standard model Z-fermion couplings for dark matter masses in the 8 GeV-5 TeV range. We conclude that mediators lighter than 2.1 TeV are excluded regardless of the DM mass, and that depending on the Z'-fermion coupling strength much heavier masses are needed to reproduce the DM thermal relic abundance while avoiding existing limits.
Horizon complementarity in elliptic de Sitter space
NASA Astrophysics Data System (ADS)
Hackl, Lucas; Neiman, Yasha
2015-02-01
We study a quantum field in elliptic de Sitter space dS4/Z2—the spacetime obtained from identifying antipodal points in dS4. We find that the operator algebra and Hilbert space cannot be defined for the entire space, but only for observable causal patches. This makes the system into an explicit realization of the horizon complementarity principle. In the absence of a global quantum theory, we propose a recipe for translating operators and states between observers. This translation involves information loss, in accordance with the fact that two observers see different patches of the spacetime. As a check, we recover the thermal state at the de Sitter temperature as a state that appears the same to all observers. This thermal state arises from the same functional that, in ordinary dS4, describes the Bunch-Davies vacuum.
Black hole complementarity in gravity's rainbow
Gim, Yongwan; Kim, Wontae E-mail: wtkim@sogang.ac.kr
2015-05-01
To see how gravity's rainbow works for black hole complementarity, we evaluate the energy required for the duplication of information in the context of black hole complementarity by calculating the critical value of the rainbow parameter for a certain class of rainbow Schwarzschild black holes. The resultant energy can be written as a well-defined limit for the vanishing rainbow parameter, which characterizes the deformation of the relativistic dispersion relation in the freely falling frame. It shows that the duplication of information in quantum mechanics is not allowed below a certain critical value of the rainbow parameter; however, it might be possible above that critical value, so that a consistent formulation of our model requires additional constraints or some other resolution in the latter case.
Růzek, Michal; Sedlák, Petr; Seiner, Hanus; Kruisová, Alena; Landa, Michal
2010-12-01
In this paper, linearized approximations of both the forward and the inverse problems of resonant ultrasound spectroscopy for the determination of mechanical properties of thin surface layers are presented. The linear relations between the frequency shifts induced by the deposition of the layer and the in-plane elastic coefficients of the layer are derived and inverted, the applicability range of the obtained linear model is discussed by a comparison with nonlinear models and finite element method (FEM), and an algorithm for the estimation of experimental errors in the inversely determined elastic coefficients is described. In the final part of the paper, the linearized inverse procedure is applied to evaluate elastic coefficients of a 310 nm thick diamond-like carbon layer deposited on a silicon substrate.
NASA Astrophysics Data System (ADS)
Kumar, Ratesh; Kaur, Harpreet; Arora, Geeta
2017-07-01
In this paper, a Haar wavelet collocation mechanism (HWCM) is developed for obtaining the solution of higher-order linear and nonlinear boundary value problems. The mechanism is based on approximating the solution by the Haar wavelet family. To tackle the nonlinearity in the problems, the quasilinearization technique is applied. Many examples are considered to demonstrate the successful application of the mechanism in obtaining highly accurate results. Using the HWCM, approximate solutions for higher-order boundary value problems (HOBVPs) are obtained and compared with exact and numerical solutions available in the literature.
Beklaryan, Leva A
2011-03-31
A boundary value problem and an initial-boundary value problem are considered for a linear functional differential equation of point type. A suitable scale of functional spaces is introduced and existence theorems for solutions are stated in terms of this scale, in a form analogous to Noether's theorem. A key fact is established for the initial-boundary value problem: the space of classical solutions of the adjoint equation must be extended to include impulsive solutions. A test for the pointwise completeness of solutions is obtained. The results presented are based on a formalism developed by the author for this type of equation. Bibliography: 7 titles.
NASA Technical Reports Server (NTRS)
Halyo, N.; Caglayan, A. K.
1976-01-01
This paper considers the control of a continuous linear plant disturbed by white plant noise when the control is constrained to be a piecewise constant function of time; i.e. a stochastic sampled-data system. The cost function is the integral of quadratic error terms in the state and control, thus penalizing errors at every instant of time while the plant noise disturbs the system continuously. The problem is solved by reducing the constrained continuous problem to an unconstrained discrete one. It is shown that the separation principle for estimation and control still holds for this problem when the plant disturbance and measurement noise are Gaussian.
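The reduced discrete-time LQ problem described above leads to a standard backward Riccati recursion for the optimal feedback gains. The sketch below uses our own illustrative system matrices (a discretized double integrator) and is a generic textbook recursion, not the paper's specific derivation.

```python
import numpy as np

def dlqr_gains(A, B, Q, R, horizon):
    """Backward Riccati recursion for the finite-horizon discrete LQ problem:
    K_k = (R + B' P B)^{-1} B' P A,  P <- Q + A' P (A - B K_k)."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1], P  # gains reordered forward in time

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])     # illustrative sampled double integrator
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
gains, P = dlqr_gains(A, B, Q, R, horizon=200)
closed_loop = A - B @ gains[0]
print(np.abs(np.linalg.eigvals(closed_loop)))
```

Under the separation principle the paper establishes, the same gains would be applied to the state estimate from a Kalman filter rather than to the true state, with no loss of optimality.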
NASA Astrophysics Data System (ADS)
Hellmich, Ch.; Ulm, F.-J.; Mang, H. A.
In this work, after a short review of the respective thermodynamic formulation, the algorithmic treatment of coupled chemo-thermal problems with exo- or endothermal reactions is addressed. The Finite Element Method (FEM) serves as the analysis tool. Consistent linearization of the discretized evolution equations results in quadratic convergence of the global Newton-Raphson equilibrium iteration. This renders solutions of practical engineering problems feasible. The range of these problems encompasses the early-age behavior of concrete as well as agricultural applications. In order to demonstrate the applicability of the presented material law, a 3D material test for shotcrete is re-analyzed.
ERIC Educational Resources Information Center
Sole, Marla A.
2016-01-01
Open-ended questions that can be solved using different strategies help students learn and integrate content, and provide teachers with greater insights into students' unique capabilities and levels of understanding. This article provides a problem that was modified to allow for multiple approaches. Students tended to employ high-powered, complex,…
NASA Astrophysics Data System (ADS)
Singh, Prince; Sharma, Dinkar
2017-07-01
Series solution is obtained on solving non-linear fractional partial differential equation using homotopy perturbation transformation method. First of all, we apply homotopy perturbation transformation method to obtain the series solution of non-linear fractional partial differential equation. In this case, the fractional derivative is described in Caputo sense. Then, we present the facts obtained by analyzing the convergence of this series solution. Finally, the established fact is supported by an example.
A Novel Numerical Algorithm of Numerov Type for 2D Quasi-linear Elliptic Boundary Value Problems
NASA Astrophysics Data System (ADS)
Mohanty, R. K.; Kumar, Ravindra
2014-11-01
In this article, using three function evaluations, we discuss a nine-point compact scheme of O(Δy² + Δx⁴) based on Numerov-type discretization for the solution of 2D quasi-linear elliptic equations with given Dirichlet boundary conditions, where Δy > 0 and Δx > 0 are the grid sizes in the y- and x-directions, respectively. Iterative methods for the diffusion-convection equation are discussed in detail. We use block iterative methods to solve the resulting systems of linear and nonlinear algebraic difference equations. Comparative results for some physical problems are given to illustrate the usefulness of the proposed method.
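A 1D linear analogue of the Numerov-type discretization can be sketched in a few lines (the paper treats 2D quasi-linear problems; this model problem y'' = f(x) with homogeneous Dirichlet data is assumed for illustration).

```python
import numpy as np

# 1D Numerov scheme for y'' = f(x), y(0) = y(1) = 0:
#   (y[i-1] - 2 y[i] + y[i+1]) / h**2 = (f[i-1] + 10 f[i] + f[i+1]) / 12,
# which is fourth-order accurate.
N = 50                                     # interior grid points
x = np.linspace(0.0, 1.0, N + 2)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)          # exact solution: y = sin(pi x)

# Tridiagonal system for the interior unknowns (boundary values are zero).
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
rhs = (f[:-2] + 10.0 * f[1:-1] + f[2:]) / 12.0

y = np.zeros(N + 2)
y[1:-1] = np.linalg.solve(A, rhs)
err = np.max(np.abs(y - np.sin(np.pi * x)))
```

The observed error is far below what a standard second-order stencil gives on the same grid, reflecting the O(h⁴) accuracy.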
NASA Astrophysics Data System (ADS)
Ishwar, B.; Sharma, J. P.
2012-02-01
We have discussed non-linear stability in the photogravitational non-planar restricted three-body problem with an oblate smaller primary. By photogravitational we mean that both primaries are radiating. We normalized the Hamiltonian using a Lie transform as in Coppola and Rand (Celest. Mech. 45:103, 1989). We transformed the system into Birkhoff's normal form. Lie transforms reduce the system to an equivalent simpler system which is immediately solvable. Applying Arnold's theorem, we have found non-linear stability criteria. We conclude that L6 is stable. We plotted graphs for (ω1, D2); they are rectangular hyperbolas.
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Alafifi, A.
2016-12-01
Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until the desired number of alternatives has been generated. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one
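The hit-and-run steps described above can be sketched for the convex, linear case (the null-space transform and slice-sampling steps for non-convex problems are omitted). The toy LP and tolerance below are invented for illustration.

```python
import numpy as np

# Hit-and-run sampling of a near-optimal region.  Toy problem (invented):
#   minimize c.x  s.t.  x >= 0,  x1 + x2 <= 1,  with optimum -2 at (0, 1),
# and near-optimal tolerance tau on the objective value.
rng = np.random.default_rng(0)
c = np.array([-1.0, -2.0])
tau = 0.2

# All constraints written as A x <= b; the last row is the near-optimal cut
# c.x <= optimum + tau.
A = np.array([[-1.0, 0.0],
              [0.0, -1.0],
              [1.0, 1.0],
              [-1.0, -2.0]])
b = np.array([0.0, 0.0, 1.0, -2.0 + tau])

def hit_and_run(x, n_samples=500):
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                     # random direction
        ad, slack = A @ d, b - A @ x
        pos, neg = ad > 1e-12, ad < -1e-12
        t_hi = np.min(slack[pos] / ad[pos], initial=np.inf)
        t_lo = np.max(slack[neg] / ad[neg], initial=-np.inf)
        x = x + rng.uniform(t_lo, t_hi) * d        # run a random distance
        samples.append(x.copy())
    return np.array(samples)

samples = hit_and_run(np.array([0.0, 0.95]))       # feasible starting point
```

Every generated alternative stays inside the near-optimal region because each run is bounded by the active linear constraints along the hit line.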
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
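A naive baseline for the problem class the algorithm targets, nonnegativity-constrained least squares with many observation vectors, solves one column at a time; the fast combinatorial algorithm gains its speed by grouping columns that share an active set so factorizations are reused. The random data below are invented.

```python
import numpy as np
from scipy.optimize import nnls

# Column-by-column nonnegative least squares: min ||A x - b||, x >= 0,
# repeated for each of many observation vectors (the slow baseline the
# combinatorial reorganization improves on).
rng = np.random.default_rng(1)
A = rng.random((20, 5))
B = rng.random((20, 100))            # 100 observation vectors

X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
```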
Influence of geometrical parameters on the linear stability of a Bénard-Marangoni problem
NASA Astrophysics Data System (ADS)
Hoyas, S.; Fajardo, P.; Pérez-Quiles, M. J.
2016-04-01
A linear stability analysis of a thin liquid film flowing over a plate is performed. The analysis is performed in an annular domain when momentum diffusivity and thermal diffusivity are comparable (relatively low Prandtl number, Pr = 1.2). The combined influence of the aspect ratio (Γ) and of gravity, through the Bond number (Bo), on the linear stability of the flow is analyzed. Two different regions in the Γ-Bo plane have been identified. In the first one the basic state presents a linear regime (in which the temperature gradient does not change sign with r). In the second one, the flow presents a nonlinear regime, also called return flow. A great diversity of bifurcations has been found just by changing the domain depth d. The results obtained in this work are in agreement with some reported experiments, and give a deeper insight into the effect of physical parameters on bifurcations.
Generalized Quasi-Variational Inequality and Implicit Complementarity Problems
1989-10-01
of topological spaces and continuous maps. Hence A is a retract of X if and only if there is a continuous map r : X → A such that r(x) = x, ∀ x ∈ A...
Complementarity and entanglement in quantum information theory
NASA Astrophysics Data System (ADS)
Tessier, Tracey Edward
This research investigates two inherently quantum mechanical phenomena, namely complementarity and entanglement, from an information-theoretic perspective. Beyond philosophical implications, a thorough grasp of these concepts is crucial for advancing our understanding of foundational issues in quantum mechanics, as well as in studying how the use of quantum systems might enhance the performance of certain information processing tasks. The primary goal of this thesis is to shed light on the natures and interrelationships of these phenomena by approaching them from the point of view afforded by information theory. We attempt to better understand these pillars of quantum mechanics by studying the various ways in which they govern the manipulation of information, while at the same time gaining valuable insight into the roles they play in specific applications. The restrictions that nature places on the distribution of correlations in a multipartite quantum system play fundamental roles in the evolution of such systems and yield vital insights into the design of protocols for the quantum control of ensembles with potential applications in the field of quantum computing. By augmenting the existing formalism for quantifying entangled correlations, we show how this entanglement sharing behavior may be studied in increasingly complex systems of both theoretical and experimental significance. Further, our results shed light on the dynamical generation and evolution of multipartite entanglement by demonstrating that individual members of an ensemble of identical systems coupled to a common probe can become entangled with one another, even when they do not interact directly. The findings presented in this thesis support the conjecture that Hilbert space dimension is an objective property of a quantum system since it constrains the number of valid conceptual divisions of the system into subsystems. These arbitrary observer-induced distinctions are integral to the theory since
General theory of spherically symmetric boundary-value problems of the linear transport theory.
NASA Technical Reports Server (NTRS)
Kanal, M.
1972-01-01
A general theory of spherically symmetric boundary-value problems of the one-speed neutron transport theory is presented. The formulation is also applicable to the 'gray' problems of radiative transfer. The Green's function for the purely absorbing medium is utilized in obtaining the normal mode expansion of the angular densities for both interior and exterior problems. As the integral equations for unknown coefficients are regular, a general class of reduction operators is introduced to reduce such regular integral equations to singular ones with a Cauchy-type kernel. Such operators then permit one to solve the singular integral equations by the standard techniques due to Muskhelishvili. We discuss several spherically symmetric problems. However, the treatment is kept sufficiently general to deal with problems lacking azimuthal symmetry. In particular the procedure seems to work for regions whose boundary coincides with one of the coordinate surfaces for which the Helmholtz equation is separable.
NASA Technical Reports Server (NTRS)
Sain, M. K.; Antsaklis, P. J.; Gejji, R. R.; Wyman, B. F.; Peczkowski, J. L.
1981-01-01
Zames (1981) has observed that there is, in general, no 'separation principle' to guarantee optimality of a division between control law design and filtering of plant uncertainty. Peczkowski and Sain (1978) have solved a model matching problem using transfer functions. Taking into consideration this investigation, Peczkowski et al. (1979) proposed the Total Synthesis Problem (TSP), wherein both the command/output-response and command/control-response are to be synthesized, subject to the plant constraint. The TSP concept can be subdivided into a Nominal Design Problem (NDP), which is not dependent upon specific controller structures, and a Feedback Synthesis Problem (FSP), which is. Gejji (1980) found that NDP was characterized in terms of the plant structural matrices and a single, 'good' transfer function matrix. Sain et al. (1981) have extended this NDP work. The present investigation is concerned with a study of FSP for the unity feedback case. NDP, together with feedback synthesis, is understood as a Total Synthesis Problem.
NASA Technical Reports Server (NTRS)
Sain, M. K.; Antsaklis, P. J.; Gejji, R. R.; Wyman, B. F.; Peczkowski, J. L.
1981-01-01
Zames (1981) has observed that there is, in general, no 'separation principle' to guarantee optimality of a division between control law design and filtering of plant uncertainty. Peczkowski and Sain (1978) have solved a model matching problem using transfer functions. Taking into consideration this investigation, Peczkowski et al. (1979) proposed the Total Synthesis Problem (TSP), wherein both the command/output-response and command/control-response are to be synthesized, subject to the plant constraint. The TSP concept can be subdivided into a Nominal Design Problem (NDP), which is not dependent upon specific controller structures, and a Feedback Synthesis Problem (FSP), which is. Gejji (1980) found that NDP was characterized in terms of the plant structural matrices and a single, 'good' transfer function matrix. Sain et al. (1981) have extended this NDP work. The present investigation is concerned with a study of FSP for the unity feedback case. NDP, together with feedback synthesis, is understood as a Total Synthesis Problem.
General theory of spherically symmetric boundary-value problems of the linear transport theory.
NASA Technical Reports Server (NTRS)
Kanal, M.
1972-01-01
A general theory of spherically symmetric boundary-value problems of the one-speed neutron transport theory is presented. The formulation is also applicable to the 'gray' problems of radiative transfer. The Green's function for the purely absorbing medium is utilized in obtaining the normal mode expansion of the angular densities for both interior and exterior problems. As the integral equations for unknown coefficients are regular, a general class of reduction operators is introduced to reduce such regular integral equations to singular ones with a Cauchy-type kernel. Such operators then permit one to solve the singular integral equations by the standard techniques due to Muskhelishvili. We discuss several spherically symmetric problems. However, the treatment is kept sufficiently general to deal with problems lacking azimuthal symmetry. In particular the procedure seems to work for regions whose boundary coincides with one of the coordinate surfaces for which the Helmholtz equation is separable.
Constructive Processes in Linear Order Problems Revealed by Sentence Study Times
ERIC Educational Resources Information Center
Mynatt, Barbee T.; Smith, Kirk H.
1977-01-01
This research was a further test of the theory of constructive processes proposed by Foos, Smith, Sabol, and Mynatt (1976) to account for differences among presentation orders in the construction of linear orders. This theory is composed of different series of mental operations that must be performed when an order relationship is integrated with…
Coercive solvability of two-interval Sturm-Liouville problems with abstract linear operator
NASA Astrophysics Data System (ADS)
Aydemir, Kadriye; Olǧar, Hayati
2017-04-01
In this paper we focus our attention on a new type of nonhomogeneous Sturm-Liouville system with an abstract linear operator contained in the equation. A different approach is used here to investigate such important properties as topological isomorphism and coercive solvability. Moreover, we prove that the corresponding resolvent operator is compact in a suitable Hilbert space.
NASA Technical Reports Server (NTRS)
Kleinman, D. L.
1976-01-01
A numerical technique is given for solving the matrix quadratic equation that arises in the optimal stationary control of linear systems with state (and/or control) dependent noise. The technique exploits fully existing, efficient algorithms for the matrix Lyapunov and Ricatti equations. The computational requirements are discussed, with an associated example.
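A classical way to exploit Lyapunov solvers for a Riccati-type matrix quadratic equation is the Kleinman-Newton iteration, sketched below for the standard (noise-free) LQR case; the state-dependent-noise variant in the report adds extra terms, and the plant data here are assumed for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Kleinman-Newton iteration for A'P + PA - P B R^{-1} B' P + Q = 0:
# each step solves one Lyapunov equation for the current closed loop.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # stable, so K0 = 0 is stabilizing
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))
for _ in range(20):
    Ac = A - B @ K
    # Lyapunov equation: Ac' P + P Ac = -(Q + K' R K)
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)

P_exact = solve_continuous_are(A, B, Q, R)
```

Starting from any stabilizing gain, the iterates converge quadratically to the Riccati solution, which is why efficient Lyapunov solvers carry most of the computational load.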
NASA Technical Reports Server (NTRS)
Rosen, I. G.; Wang, C.
1992-01-01
The convergence of solutions to the discrete- or sampled-time linear quadratic regulator problem and associated Riccati equation for infinite-dimensional systems to the solutions to the corresponding continuous-time problem and equation, as the length of the sampling interval tends toward zero (equivalently, the sampling rate toward infinity), is established. Both the finite- and infinite-time horizon problems are studied. In the finite-time horizon case, strong continuity of the operators that define the control system and performance index, together with a stability and consistency condition on the sampling scheme, are required. For the infinite-time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.
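The convergence being established can be sketched numerically for a scalar plant: the sampled-data Riccati solution approaches the continuous-time one as the sampling interval h shrinks. The first-order cost discretization (Qd = Q h, Rd = R h) is an assumption made for brevity.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_discrete_are

# Scalar plant x' = a x + b u; compare the discrete-time Riccati solution
# for decreasing sampling intervals h with the continuous-time solution.
a, bb, q, r = -1.0, 1.0, 1.0, 1.0
A = np.array([[a]]); B = np.array([[bb]])
Q = np.array([[q]]); R = np.array([[r]])

P_cont = solve_continuous_are(A, B, Q, R)   # here sqrt(2) - 1

errs = []
for h in (1e-1, 1e-2, 1e-3):
    Ad = np.array([[np.exp(a * h)]])                      # exact ZOH state map
    Bd = np.array([[bb * (np.exp(a * h) - 1.0) / a]])     # exact ZOH input map
    P_disc = solve_discrete_are(Ad, Bd, Q * h, R * h)
    errs.append(abs(P_disc[0, 0] - P_cont[0, 0]))
```

The error decreases with h, consistent with the convergence result for the finite-dimensional special case.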
NASA Technical Reports Server (NTRS)
Rosen, I. G.; Wang, C.
1990-01-01
The convergence of solutions to the discrete or sampled time linear quadratic regulator problem and associated Riccati equation for infinite dimensional systems to the solutions to the corresponding continuous time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero (infinity), is established. Both the finite and infinite time horizon problems are studied. In the finite time horizon case, strong continuity of the operators which define the control system and performance index, together with a stability and consistency condition on the sampling scheme, are required. For the infinite time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
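A tiny brute-force illustration of a grid-based location problem with fixed costs is sketched below; all data are invented, and the paper's ILP model and decomposition heuristic are what make large instances tractable.

```python
import itertools

# Pick facility cells of minimum total fixed cost so that every demand cell
# lies within Chebyshev radius 2 of some chosen facility (toy instance).
grid = [(i, j) for i in range(4) for j in range(4)]
demand = [(0, 0), (3, 3), (0, 3)]
fixed_cost = {cell: 1.0 + 0.1 * (cell[0] + cell[1]) for cell in grid}

def covers(f, d, radius=2):
    return max(abs(f[0] - d[0]), abs(f[1] - d[1])) <= radius

best_cost, best_set = float("inf"), None
for k in range(1, 4):                        # three facilities suffice here
    for combo in itertools.combinations(grid, k):
        if all(any(covers(f, d) for f in combo) for d in demand):
            cost = sum(fixed_cost[f] for f in combo)
            if cost < best_cost:
                best_cost, best_set = cost, combo
```

Exhaustive enumeration like this grows combinatorially with grid size, which is exactly why the ILP formulation and the decomposition heuristic are needed at scale.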
NASA Astrophysics Data System (ADS)
Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.
2012-11-01
Variational finite-difference methods of solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in a traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of using the variational finite-difference methods to solve linear and nonlinear problems of the statics of shells (plates).
NASA Technical Reports Server (NTRS)
Friedmann, P.; Hammond, C. E.; Woo, T.-H.
1977-01-01
Two efficient numerical methods for dealing with the stability of linear periodic systems are presented. Both methods combine the use of multivariable Floquet-Liapunov theory with an efficient numerical scheme for computing the transition matrix at the end of one period. The numerical properties of these methods are illustrated by applying them to the simple parametric excitation problem of a fixed end column. The practical value of these methods is shown by applying them to some helicopter rotor blade aeroelastic and structural dynamics problems. It is concluded that these methods are numerically efficient, general and practical for dealing with the stability of large periodic systems.
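The core computation in both methods, the transition matrix over one period and its Floquet multipliers, can be sketched for a periodically forced column equation; the parameter values are assumed, with the periodic coefficient switched off so the expected multipliers are known.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Floquet stability check: integrate the state-transition matrix of a linear
# periodic system over one period, then inspect the eigenvalues (Floquet
# multipliers) of the monodromy matrix.  Mathieu-type column equation
# x'' + (d + e cos t) x = 0 with e = 0 here, so multipliers lie on the
# unit circle.
d, e = 1.0, 0.0
T = 2.0 * np.pi

def rhs(t, y):
    Phi = y.reshape(2, 2)          # column-stacked 2x2 transition matrix
    A = np.array([[0.0, 1.0], [-(d + e * np.cos(t)), 0.0]])
    return (A @ Phi).ravel()

sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
monodromy = sol.y[:, -1].reshape(2, 2)
multipliers = np.linalg.eigvals(monodromy)
rho = np.max(np.abs(multipliers))  # spectral radius: > 1 means instability
```

For nonzero e the same computation maps out the parametric-resonance regions of the column, which is the kind of stability chart the paper's methods produce efficiently for large rotor-blade systems.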
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1978-01-01
The formulation of the classical Linear-Quadratic-Gaussian stochastic control problem as employed in low thrust navigation analysis is reviewed. A reformulation is then presented which eliminates a potentially unreliable matrix subtraction in the control calculations, improves the computational efficiency, and provides for a cleaner computational interface between the estimation and control processes. Lastly, the application of the U-D factorization method to the reformulated equations is examined with the objective of achieving a complete set of factored equations for the joint estimation and control problem.
NASA Astrophysics Data System (ADS)
Moryakov, A. V.
2016-12-01
An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for the data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by simplicity and the possibility to solve nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iterative process.
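The representation used there, a series of orthogonal polynomials on [0, 1], can be sketched with shifted Legendre polynomials; the model problem y' = -y, y(0) = 1 (solution exp(-t)) is assumed here, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import Legendre

# Fit a shifted Legendre series on [0, 1] to exp(-t), the solution of the
# model Cauchy problem y' = -y, y(0) = 1, and measure the truncation error.
t = np.linspace(0.0, 1.0, 200)
series = Legendre.fit(t, np.exp(-t), deg=8, domain=[0.0, 1.0])
err = np.max(np.abs(series(t) - np.exp(-t)))
```

Even a degree-8 series reproduces the smooth solution essentially to solver accuracy, which is what makes such expansions attractive for large ODE systems.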
A discussion of a homogenization procedure for a degenerate linear hyperbolic-parabolic problem
NASA Astrophysics Data System (ADS)
Flodén, L.; Holmbom, A.; Jonasson, P.; Lobkova, T.; Lindberg, M. Olsson; Zhang, Y.
2017-01-01
We study the homogenization of a hyperbolic-parabolic PDE with oscillations in one fast spatial scale. Moreover, the first-order time derivative has a degenerate coefficient passing to infinity when ɛ → 0. We obtain a local problem which is of elliptic type, while the homogenized problem is also in some sense an elliptic problem, but with the limit of ɛ⁻¹∂ₜuɛ as an undetermined extra source term in the right-hand side. The results are somewhat surprising, and work remains to obtain a fully rigorous treatment. Hence the last section is devoted to a discussion of the reasonableness of our conjecture, including numerical experiments.
Complementarity among natural enemies enhances pest suppression.
Dainese, Matteo; Schneider, Gudrun; Krauss, Jochen; Steffan-Dewenter, Ingolf
2017-08-15
Natural enemies have been shown to be effective agents for controlling insect pests in crops. However, it remains unclear how different natural enemy guilds contribute to the regulation of pests and how this might be modulated by landscape context. In a field exclusion experiment in oilseed rape (OSR), we found that parasitoids and ground-dwelling predators acted in a complementary way to suppress pollen beetles, suggesting that pest control by multiple enemies attacking a pest during different periods of its occurrence in the field improves biological control efficacy. The density of pollen beetle significantly decreased with an increased proportion of non-crop habitats in the landscape. Parasitism had a strong effect on pollen beetle numbers in landscapes with a low or intermediate proportion of non-crop habitats, but not in complex landscapes. Our results underline the importance of different natural enemy guilds to pest regulation in crops, and demonstrate how biological control can be strengthened by complementarity among natural enemies. The optimization of natural pest control by adoption of specific management practices at local and landscape scales, such as establishing non-crop areas, low-impact tillage, and temporal crop rotation, could significantly reduce dependence on pesticides and foster yield stability through ecological intensification in agriculture.
A holographic model for black hole complementarity
NASA Astrophysics Data System (ADS)
Lowe, David A.; Thorlacius, Larus
2016-12-01
We explore a version of black hole complementarity, where an approximate semiclassical effective field theory for interior infalling degrees of freedom emerges holographically from an exact evolution of exterior degrees of freedom. The infalling degrees of freedom have a complementary description in terms of outgoing Hawking radiation and must eventually decohere with respect to the exterior Hamiltonian, leading to a breakdown of the semiclassical description for an infaller. Trace distance is used to quantify the difference between the complementary time evolutions, and to define a decoherence time. We propose a dictionary where the evolution with respect to the bulk effective Hamiltonian corresponds to mean field evolution in the holographic theory. In a particular model for the holographic theory, which exhibits fast scrambling, the decoherence time coincides with the scrambling time. The results support the hypothesis that decoherence of the infalling holographic state and disruptive bulk effects near the curvature singularity are complementary descriptions of the same physics, which is an important step toward resolving the black hole information paradox.
Quark lepton complementarity and renormalization group effects
Schmidt, Michael A.; Smirnov, Alexei Yu.
2006-12-01
We consider a scenario for the quark-lepton complementarity relations between mixing angles in which the bimaximal mixing follows from the neutrino mass matrix. According to this scenario, in the lowest order the angle θ12 is ≈1σ (1.5°-2°) above the best-fit point, coinciding practically with the tribimaximal mixing prediction. Realization of this scenario in the context of the seesaw type-I mechanism with leptonic Dirac mass matrices approximately equal to the quark mass matrices is studied. We calculate the renormalization group corrections to θ12 as well as to θ13 in the standard model (SM) and minimal supersymmetric standard model (MSSM). We find that in a large part of the parameter space the corrections δθ12 are small or negligible. In the MSSM version of the scenario, the correction δθ12 is in general positive. Small negative corrections appear in the case of an inverted mass hierarchy and opposite CP parities of ν1 and ν2, when the leading contributions to the θ12 running are strongly suppressed. The corrections are negative in the SM version in a large part of the parameter space for values of the relative CP phase of ν1 and ν2: φ > π/2.
Bohr's Principle of Complementarity and Beyond
NASA Astrophysics Data System (ADS)
Jones, R.
2004-05-01
All knowledge is of an approximate character and always will be (Russell, Human Knowledge, 1948, pg 497,507). The laws of nature are not unique (Smolin, Three Roads to Quantum Gravity, 2001, pg 195). There may be a number of different sets of equations which describe our data just as well as the present known laws do (Mitchell, Machine Learning, 1997, pg 65-66 and Cooper, Machine Learning, Vol. 9, 1992, pg 319) In the future every field of intellectual study will possess multiple theories of its domain and scientific work and engineering will be performed based on the ensemble predictions of ALL of these. In some cases the theories may be quite divergent, differing greatly one from the other. The idea can be considered an extension of Bohr's notions of complementarity, "...different experimental arrangements.. described by different physical concepts...together and only together exhaust the definable information we can obtain about the object" (Folse, The Philosophy of Niels Bohr, 1985, pg 238). This idea is not postmodernism. Witchdoctor's theories will not form a part of medical science. Objective data, not human opinion, will decide which theories we use and how we weight their predictions.
Complementarity, wave-particle duality, and domains of applicability
NASA Astrophysics Data System (ADS)
Bokulich, Peter
2017-08-01
Complementarity has frequently, but mistakenly, been conflated with wave-particle duality, and this conflation has led to pervasive misunderstandings of Bohr's views and several misguided claims of an experimental "disproof" of complementarity. In this paper, I explain what Bohr meant by complementarity, and how this is related to, but distinct from, wave-particle duality. I list a variety of possible meanings of wave-particle duality, and canvass the ways in which they are (or are not) supported by quantum physics and Bohr's interpretation. I also examine the extent to which wave-particle duality should be viewed as an example of the sort of dualities one finds in, e.g., string theory. I argue that the most fruitful way of reading Bohr's account of complementarity is by comparing it to current accounts of effective theories with limited domains of applicability.
Experimental test of Bohr's complementarity principle with single neutral atoms
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Tian, Yali; Yang, Chen; Zhang, Pengfei; Li, Gang; Zhang, Tiancai
2016-12-01
An experimental test of the quantum complementarity principle based on single neutral atoms trapped in a blue-detuned bottle trap was performed. A Ramsey interferometer was used to assess the wavelike or particlelike behavior with the second π/2 rotation on or off. The wavelike and particlelike behaviors are characterized by the visibility V of the interference and the predictability P of the which-path information, respectively. The measured results fulfill the complementarity relation P² + V² ≤ 1. Imbalanced losses were deliberately introduced to the system, and we find the complementarity relation is then formally "violated." All the experimental results can be completely explained theoretically by quantum mechanics without considering interference between wave and particle behaviors. This observation complements existing information concerning Bohr's complementarity principle based on the wave-particle duality of a massive quantum system.
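The duality relation tested above has a simple closed form for a pure two-path state: if the paths carry probabilities p₁ and p₂, the predictability is P = |p₁ − p₂| and the visibility is V = 2√(p₁p₂), so P² + V² = (p₁ + p₂)² = 1. A minimal numeric check of this standard textbook relation (not the paper's own analysis):

```python
import math

def predictability_visibility(p1, p2):
    """Which-path predictability P and fringe visibility V for a pure
    two-path state with path probabilities p1, p2 (p1 + p2 = 1)."""
    P = abs(p1 - p2)
    V = 2.0 * math.sqrt(p1 * p2)
    return P, V

for p1 in (0.5, 0.7, 0.9):
    P, V = predictability_visibility(p1, 1.0 - p1)
    # Pure states saturate the bound: P^2 + V^2 = 1
    assert abs(P ** 2 + V ** 2 - 1.0) < 1e-12
```

Mixed states or (as in the experiment) imbalanced losses move the sum below 1 or require a corrected relation.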
Parallel Sparse Linear System and Eigenvalue Problem Solvers: From Multicore to Petascale Computing
2015-06-01
...symmetric eigenvalue problems that achieve high performance on a single multicore node and clusters of many multicore nodes. Further, we demonstrate ... a speed improvement of 24 if we use the same single node with 80 cores, and a speed improvement of 10.4 if we use a cluster of 8 nodes in which each node ...
NASA Technical Reports Server (NTRS)
Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.
1985-01-01
A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. The main rotor tip speed state is not measurable, so a Kalman filter of the rotor dynamics was used to estimate this state. The crossover frequency of the system was increased to 10 rad/s, compared to 2 rad/s for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.
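The LQR gain calculation described above can be sketched for a scalar discrete-time plant via the backward Riccati iteration; the plant numbers below are purely illustrative, not the report's engine model:

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """LQR gain for the scalar plant x[k+1] = a x[k] + b u[k] with cost
    sum(q x^2 + r u^2), via backward Riccati iteration to convergence."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return k

# Illustrative numbers only: slow open-loop pole, small input gain
k = dlqr_scalar(a=0.95, b=0.1, q=1.0, r=0.01)
closed_loop_pole = 0.95 - 0.1 * k
assert abs(closed_loop_pole) < 0.95  # feedback speeds up the response
```

For the multivariable engine/rotor model the same recursion runs with matrices, and the unmeasurable tip-speed state is replaced by its Kalman filter estimate before applying the gain.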
Beam dynamics in super-conducting linear accelerator: Problems and solutions
NASA Astrophysics Data System (ADS)
Senichev, Yu.; Bogdanov, A.; Maier, R.; Vasyukhin, N.
2006-03-01
A linac based on SC cavities has special features. Due to specific requirements, the SC cavity should have a constant geometry of the accelerating cells, with a limited number of cavity families. All cavities are divided into modules, and each module is housed in one cryostat. First of all, such cavity geometry leads to non-synchronism. Secondly, the inter-cryostat drift space parametrically perturbs the longitudinal motion. In this article, we study the non-linear resonant effects due to the inter-cryostat drift space, using the separatrix formalism for a super-conducting linear accelerator [Yu. Senichev, A. Bogdanov, R. Maier, Phys. Rev. ST AB 6 (2003) 124001]. Methods to avoid or to compensate the resonant effect are also presented. We consider 3D beam dynamics together with space charge effects. The final lattice meets all physical requirements.
General linear methods and friends: Toward efficient solutions of multiphysics problems
NASA Astrophysics Data System (ADS)
Sandu, Adrian
2017-07-01
Time-dependent multiphysics partial differential equations are of great practical importance as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.
The complementarity model of brain-body relationship.
Walach, Harald
2005-01-01
We introduce the complementarity concept to understand mind-body relations and the question why the biopsychosocial model has in fact been praised, but not integrated into medicine. By complementarity, we mean that two incompatible descriptions have to be used to describe something in full. The complementarity model states that the physical and the mental side of the human organism are two complementary notions. This contradicts the prevailing materialist notion that mental and psychological processes are emergent properties of an organism. The complementarity model also has consequences for a further understanding of biological processes. Complementarity is a defining property of quantum systems proper. Such systems exhibit correlated properties that result in coordinated behavior without signal transfer or interaction. This is termed EPR-correlation or entanglement. Weak quantum theory, a generalized version of quantum mechanics proper, predicts entanglement also for macroscopic systems, provided a local and a global observable are complementary. Thus, complementarity could be the key to understanding holistically correlated behavior on different levels of systemic complexity.
NASA Astrophysics Data System (ADS)
Beck, Lisa; Bulíček, Miroslav; Málek, Josef; Süli, Endre
2017-08-01
We investigate the properties of certain elliptic systems leading, a priori, to solutions that belong to the space of Radon measures. We show that if the problem is equipped with a so-called asymptotic radial structure, then the solution can in fact be understood as a standard weak solution, with one proviso: analogously to the case of minimal surface equations, the attainment of the boundary value is penalized by a measure supported on (a subset of) the boundary, which, for the class of problems under consideration here, is the part of the boundary where a Neumann boundary condition is imposed.
Why the Afshar experiment does not refute complementarity
NASA Astrophysics Data System (ADS)
Kastner, R. E.
A modified version of Young's experiment by Shahriar Afshar demonstrates that, prior to what appears to be a "which-way" measurement, an interference pattern exists. Afshar has claimed that this result constitutes a violation of the Principle of Complementarity. This paper discusses the implications of this experiment and considers how Cramer's Transactional Interpretation easily accommodates the result. It is also shown that the Afshar experiment is analogous in key respects to a spin one-half particle prepared as "spin up along x ", subjected to a nondestructive confirmation of that preparation, and post-selected in a specific state of spin along z . The terminology "which-way" or "which-slit" is critiqued; it is argued that this usage by both Afshar and his critics is misleading and has contributed to confusion surrounding the interpretation of the experiment. Nevertheless, it is concluded that Bohr would have had no more problem accounting for the Afshar result than he would in accounting for the aforementioned pre- and post-selection spin experiment, in which the particle's preparation state is confirmed by a nondestructive measurement prior to post-selection. In addition, some new inferences about the interpretation of delayed choice experiments are drawn from the analysis.
Causal patch complementarity: The inside story for old black holes
NASA Astrophysics Data System (ADS)
Ilgin, Irfan; Yang, I.-Sheng
2014-02-01
We carefully analyze the causal patches which belong to observers falling into an old black hole. We show that without a distillation-like process, the Almheiri-Marolf-Polchinski-Sully (AMPS) paradox cannot challenge complementarity. That is because the two ingredients for the paradox, the interior region and the early Hawking radiation, cannot be spacelike separated and both low energy within any single causal patch. Either the early quanta have Planckian wavelengths, or the interior region is exponentially smaller than the Schwarzschild size. This means that their appearances in the low-energy theory are strictly timelike separated, which nullifies the problem of double entanglement/purity or quantum cloning. This verifies that the AMPS paradox is either only a paradox in the global description like the original information paradox, or a direct consequence of the assumption that a distillation process is feasible without hidden consequences. We discuss possible relations to cosmological causal patches and the possibility of transferring energy without transferring quantum information.
Linear and nonlinear pattern selection in Rayleigh-Benard stability problems
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1993-01-01
A new algorithm is introduced to compute finite-amplitude states using primitive variables for Rayleigh-Benard convection on relatively coarse meshes. The algorithm is based on a finite-difference matrix-splitting approach that separates all physical and dimensional effects into one-dimensional subsets. The nonlinear pattern selection process for steady convection in an air-filled square cavity with insulated side walls is investigated for Rayleigh numbers up to 20,000. The internalization of disturbances that evolve into coherent patterns is investigated and transient solutions from linear perturbation theory are compared with and contrasted to the full numerical simulations.
A non-local non-autonomous diffusion problem: linear and sublinear cases
NASA Astrophysics Data System (ADS)
Figueiredo-Sousa, Tarcyana S.; Morales-Rodrigo, Cristian; Suárez, Antonio
2017-10-01
In this work we investigate an elliptic problem with a non-local non-autonomous diffusion coefficient. Mainly, we use bifurcation arguments to obtain existence of positive solutions. The structure of the set of positive solutions depends strongly on the balance between the non-local and the reaction terms.
ERIC Educational Resources Information Center
Lawrence, Virginia
No longer just a user of commercial software, the 21st-century teacher is a designer of interactive software based on theories of learning. This software, a comprehensive study of straight-line equations, enhances conceptual understanding, sketching, graphical interpretation, and word-problem solving skills, as well as making connections to real-life and…
A Longitudinal Solution to the Problem of Differential Linear Growth Patterns in Quasi-Experiments.
ERIC Educational Resources Information Center
Olejnik, Stephen; Porter, Andrew C.
Differential achievement growth patterns between comparison groups are a problem associated with data analysis in compensatory education programs. Children in greatest need of additional assistance are usually assigned to the program rather than to an alternative treatment, so that the comparison groups may vary in several ways, in addition to the…
ERIC Educational Resources Information Center
Stamovlasis, Dimitrios
2010-01-01
The aim of the present paper is two-fold. First, it attempts to support previous findings on the role of some psychometric variables, such as, M-capacity, the degree of field dependence-independence, logical thinking and the mobility-fixity dimension, on students' achievement in chemistry problem solving. Second, the paper aims to raise some…
ERIC Educational Resources Information Center
Fan, Xitao; Wang, Lin
The Monte Carlo study compared the performance of predictive discriminant analysis (PDA) and that of logistic regression (LR) for the two-group classification problem. Prior probabilities were used for classification, but the cost of misclassification was assumed to be equal. The study used a fully crossed three-factor experimental design (with…
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP are expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, decision makers can make a tradeoff between model stability and plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.
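A crude way to see how interval-valued (α-cut) costs propagate into an interval-valued objective, much simplified relative to the GFLP stepwise interactive algorithm: solve the underlying allocation once at the lower and once at the upper cost bounds. All numbers below are hypothetical:

```python
def allocate(costs, caps, demand):
    """Greedy optimal allocation for: minimize sum(c_i * x_i)
    subject to sum(x_i) >= demand and 0 <= x_i <= cap_i
    (cheapest control measure is filled first)."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    x = [0.0] * len(costs)
    left = demand
    for i in order:
        x[i] = min(caps[i], left)
        left -= x[i]
    return x, sum(c * xi for c, xi in zip(costs, x))

caps, demand = [60.0, 80.0], 100.0          # hypothetical SO2 reduction targets
# Interval unit costs for two control measures (lower and upper bounds)
lo = allocate([2.0, 3.0], caps, demand)[1]
hi = allocate([3.0, 5.0], caps, demand)[1]
assert lo <= hi   # the objective comes out as an interval [lo, hi]
```

GFLP additionally tracks membership grades so the output is a fuzzy set rather than a bare interval; this sketch only shows the interval endpoints.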
Fitting of dihedral terms in classical force fields as an analytic linear least-squares problem.
Hopkins, Chad W; Roitberg, Adrian E
2014-07-28
The derivation and optimization of most energy terms in modern force fields are aided by automated computational tools. It is therefore important to have algorithms to rapidly and precisely train large numbers of interconnected parameters to allow investigators to make better decisions about the content of molecular models. In particular, the traditional approach to deriving dihedral parameters has been a least-squares fit to target conformational energies through variational optimization strategies. We present a computational approach for simultaneously fitting force field dihedral amplitudes and phase constants which is analytic within the scope of the data set. This approach completes the optimal molecular mechanics representation of a quantum mechanical potential energy surface in a single linear least-squares fit by recasting the dihedral potential into a linear function in the parameters. We compare the resulting method to a genetic algorithm in terms of computational time and quality of fit for two simple molecules. As suggested in previous studies, arbitrary dihedral phases are only necessary when modeling chiral molecules, which include more than half of drugs currently in use, so we also examined a dihedral parametrization case for the drug amoxicillin and one of its stereoisomers where the target dihedral includes a chiral center. Asymmetric dihedral phases are needed in these types of cases to properly represent the quantum mechanical energy surface and to differentiate between stereoisomers about the chiral center.
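The key linearization is the identity A cos(nφ + δ) = a cos(nφ) + b sin(nφ) with a = A cos δ, b = −A sin δ, which turns the amplitude/phase fit into ordinary linear least squares. A sketch with synthetic target energies (not the paper's data or force field):

```python
import numpy as np

# Synthetic "target" torsional energies from a known dihedral potential
phi = np.linspace(0.0, 2.0 * np.pi, 73)
E = 1.4 * np.cos(phi + 0.3) + 0.6 * np.cos(3.0 * phi - 0.2)

# Linear design matrix: cos/sin column pair for each periodicity n
ns = [1, 2, 3]
A = np.column_stack([f(n * phi) for n in ns for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(A, E, rcond=None)

# Recover amplitude and phase for n = 1: a = A cos(d), b = -A sin(d)
a1, b1 = coef[0], coef[1]
amp1 = float(np.hypot(a1, b1))
phase1 = float(np.arctan2(-b1, a1))
assert abs(amp1 - 1.4) < 1e-8 and abs(phase1 - 0.3) < 1e-8
```

Because the model is exactly linear in (a, b), one `lstsq` call replaces the iterative variational optimization, which is the point of the paper's analytic formulation; the free phases recovered this way are what allow stereoisomers to be distinguished.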
Essential growth rate for bounded linear perturbation of non-densely defined Cauchy problems
NASA Astrophysics Data System (ADS)
Ducrot, A.; Liu, Z.; Magal, P.
2008-05-01
This paper is devoted to the study of the essential growth rate of a class of semigroups generated by bounded perturbations of some non-densely defined problem. We extend previous results due to Thieme [H.R. Thieme, Quasi-compact semigroups via bounded perturbation, in: Advances in Mathematical Population Dynamics--Molecules, Cells and Man, Houston, TX, 1995, in: Ser. Math. Biol. Med., vol. 6, World Sci. Publishing, River Edge, NJ, 1997, pp. 691-711] to a class of non-densely defined Cauchy problems in Lp. In particular, in this context the integrated semigroup is not operator-norm locally Lipschitz continuous. We overcome the lack of Lipschitz continuity of the integrated semigroup by deriving some weaker properties that are sufficient to give information on the essential growth rate.
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science, and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), fireflies are ranked using a sorting algorithm; the original FA was proposed with bubble sort. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used is the unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst, mean, standard deviation, number of comparisons, and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and the algorithm performed better at lower dimensions than at higher ones.
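The comparison-count difference between the two sorts is easy to reproduce on random "brightness" values; this is a sketch of the sorting component only, not the full Firefly Algorithm:

```python
import random

def bubble_sort_count(a):
    """Bubble sort; returns (sorted list, number of comparisons)."""
    a, n, comps = list(a), len(a), 0
    for i in range(n):
        for j in range(n - 1 - i):
            comps += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comps

def quick_sort_count(a):
    """Simple quicksort; returns (sorted list, number of comparisons)."""
    comps = 0
    def qs(xs):
        nonlocal comps
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        comps += len(rest)
        lo = [x for x in rest if x < pivot]
        hi = [x for x in rest if x >= pivot]
        return qs(lo) + [pivot] + qs(hi)
    return qs(list(a)), comps

random.seed(1)
brightness = [random.random() for _ in range(200)]
sb, cb = bubble_sort_count(brightness)
sq, cq = quick_sort_count(brightness)
assert sb == sq and cq < cb   # quicksort needs far fewer comparisons
```

For 200 fireflies, bubble sort always performs n(n−1)/2 = 19 900 comparisons, while quicksort averages O(n log n); the paper's observation that quicksort nonetheless took more wall-clock time would come from constant factors in its implementation, which this sketch does not model.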
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that occurs when the data residuals are too large and insufficient data are available to justify augmenting the model.
A linear decomposition method for large optimization problems. Blueprint for development
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1982-01-01
A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.
The nonconforming linear strain tetrahedron for a large deformation elasticity problem
NASA Astrophysics Data System (ADS)
Hansbo, Peter; Larsson, Fredrik
2016-12-01
In this paper we investigate the performance of the nonconforming linear strain tetrahedron element introduced by Hansbo (Comput Methods Appl Mech Eng 200(9-12):1311-1316, 2011; J Numer Methods Eng 91(10):1105-1114, 2012). This approximation uses midpoints of edges on tetrahedra in three dimensions, with either point continuity or mean continuity along the edges of the tetrahedra. Since it contains (rotated) bilinear terms, it performs substantially better than the standard constant strain element in bending. It also allows for under-integration in the form of one-point Gauss integration of volumetric terms in nearly incompressible situations. We combine under-integration of the volumetric terms with hourglass stabilization for the isochoric terms.
Fowler, Patrick W; Myrvold, Wendy
2011-11-17
Conjugated-circuit models for induced π ring currents differ in the types of circuit that they include and the weights attached to them. Choice of circuits for general π systems can be expressed compactly in terms of matchings of the circuit-deleted molecular graph. Variants of the conjugated-circuit model for induced π currents are shown to have simple closed-form solutions for linear polyacenes. Despite differing assumptions about the effect of cycle area, all the models predict the most intense perimeter current in the central rings, in general agreement with ab initio current-density maps. All tend to overestimate the rate of increase with N of the central ring current for the [N]polyacene, in comparison with molecular-orbital treatments using ipsocentric ab initio, pseudo-π, and Hückel-London approaches.
NASA Astrophysics Data System (ADS)
Ibáñez, J.; Hernández, V.; Arias, E.; Ruiz, P. A.
2009-05-01
Many scientific and engineering problems are described using Ordinary Differential Equations (ODEs) whose analytic solution is unknown. Much research has been done by the scientific community on developing numerical methods which can provide an approximate solution of the original ODE. In this work, two approaches have been considered, based on BDF and piecewise-linearized methods. The approach based on BDF methods uses a Chord-Shamanskii iteration for solving the nonlinear system which is obtained when the BDF schema is used. Two approaches based on piecewise-linearized methods have also been considered. These approaches are based on a theorem proved in this paper which allows one to compute the approximate solution at each time step by means of a block-oriented method based on diagonal Padé approximations. The difference between these implementations lies in using or not using the scaling and squaring technique. Five algorithms based on these approaches have been developed. MATLAB and Fortran versions of the above algorithms have been developed, comparing both precision and computational costs. BLAS and LAPACK libraries have been used in the Fortran implementations. In order to compare all implementations under equal conditions, fixed-step algorithms have been considered. Four of the five case studies analyzed come from biology and chemical kinetics stiff problems. Experimental results show the advantages of the proposed algorithms, especially when they are integrating stiff problems.
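The BDF-with-chord-iteration ingredient can be sketched for a scalar stiff problem: BDF2 with a frozen-slope (chord) solve of the implicit equation. This is an illustrative fixed-step sketch, not the paper's MATLAB/Fortran implementation:

```python
import math

def bdf2_chord(f, dfdy, y0, y1, t0, h, steps):
    """BDF2: y[n+1] = (4 y[n] - y[n-1]) / 3 + (2h/3) f(t[n+1], y[n+1]),
    with the implicit equation solved by a chord (frozen-Jacobian) iteration."""
    ys = [y0, y1]
    for n in range(1, steps):
        t_next = t0 + (n + 1) * h
        c = (4.0 * ys[-1] - ys[-2]) / 3.0
        y = ys[-1]                                    # predictor
        slope = 1.0 - (2.0 * h / 3.0) * dfdy(t_next, y)
        for _ in range(8):                            # chord iterations
            g = y - (2.0 * h / 3.0) * f(t_next, y) - c
            y -= g / slope
        ys.append(y)
    return ys

# Stiff linear test y' = -50 y, y(0) = 1, exact solution exp(-50 t)
lam, h = -50.0, 0.02
y1 = math.exp(lam * h)          # one exact start-up value for the 2-step method
ys = bdf2_chord(lambda t, y: lam * y, lambda t, y: lam, 1.0, y1, 0.0, h, 50)
assert abs(ys[-1]) < 1e-6       # stable decay even though h * |lam| = 1
```

An explicit method with this step size would blow up; the A-stability of BDF2 is exactly what makes the chord iteration worthwhile despite its cost per step.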
NASA Astrophysics Data System (ADS)
Ådnøy Ellingsen, Simen; Li, Yan; Smeltzer, Benjamin K.
2017-04-01
We compare different methods of approximating the dispersion relation for waves on top of currents whose direction and magnitude may vary arbitrarily with depth. Two fundamentally different approximation philosophies are in use: analytical approximation schemes, and what we term the N-layer procedure in which the velocity profile is approximated by a continuous, piecewise linear function of depth. The relative virtues of both schemes are reviewed. The N-layer procedure yields the dispersion relation with arbitrary accuracy. We present the details and subtleties of implementing this procedure in practice. We find with a good choice of layer boundaries, 4-5 layers are sufficient for accuracy of about 1%. For inhomogeneous systems with a specified source, implementation is straightforward and most complications are eschewed. Analytical approximation schemes are reviewed, and criteria of applicability are derived for the first time. In particular the much used approximation by Kirby & Chen (1989) (KCA) is compared with a new approximation which we propose. The two give similar predictions when the KCA is applicable, but our new scheme is more robust and can handle several special but realistic cases where the KCA fails. Once the dispersion relation is calculated, 3D linear problems such as initial value problems, or problems with stationary or periodic time dependence can be readily solved.
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
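Kutta's classic third-order Runge-Kutta method is one member of the family discussed, and its order is easy to check on y' = −y; this is an illustrative sketch, not one of the report's five derived examples:

```python
import math

def rk3_step(f, t, y, h):
    """One step of Kutta's third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
    return y + h / 6.0 * (k1 + 4.0 * k2 + k3)

def integrate(h):
    """Integrate y' = -y, y(0) = 1 up to t = 1 with fixed step h."""
    t, y = 0.0, 1.0
    while t < 1.0 - 1e-12:
        y = rk3_step(lambda t, y: -y, t, y, h)
        t += h
    return y

err1 = abs(integrate(0.1) - math.exp(-1.0))
err2 = abs(integrate(0.05) - math.exp(-1.0))
assert 6.0 < err1 / err2 < 10.0   # halving h cuts the error by about 2**3
```

For stiff-stability studies, the same scheme's stability function R(z) = 1 + z + z²/2 + z³/6 is examined along the negative real axis and the imaginary axis rather than its accuracy.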
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Lomax, Harvard
1950-01-01
Following the introduction of the linearized partial differential equation for nonsteady three-dimensional compressible flow, general methods of solution are given for the two and three-dimensional steady-state and two-dimensional unsteady-state equations. It is also pointed out that, in the absence of thickness effects, linear theory yields solutions consistent with the assumptions made when applied to lifting-surface problems for swept-back plan forms at sonic speeds. The solutions of the particular equations are determined in all cases by means of Green's theorem, and thus depend on the use of Green's equivalent layer of sources, sinks, and doublets. Improper integrals in the supersonic theory are treated by means of Hadamard's "finite part" technique.
Non-Linear Problems in NMR: Application of the DFM Variation of Parameters Method
NASA Astrophysics Data System (ADS)
Erker, Jay Charles
This Dissertation introduces, develops, and applies the Dirac-McLachlan-Frenkel (DFM) time dependent variation of parameters approach to Nuclear Magnetic Resonance (NMR) problems. Although never explicitly used in the treatment of time domain NMR problems to date, the DFM approach has successfully predicted the dynamics of optically prepared wave packets on excited state molecular energy surfaces. Unlike the Floquet, average Hamiltonian, and Van Vleck transformation methods, the DFM approach is not restricted by either the size or symmetry of the time domain perturbation. A particularly attractive feature of the DFM method is that measured data can be used to motivate a parameterized trial function choice and that the DFM theory provides the machinery to provide the optimum, minimum error choices for these parameters. Indeed a poor parameterized trial function choice will lead to a poor match with real experiments, even with optimized parameters. Although there are many NMR problems available to demonstrate the application of the DFM variation of parameters, five separate cases that have escaped analytical solution and thus require numerical methods are considered here: molecular diffusion in a magnetic field gradient, radiation damping in the presence of inhomogeneous broadening, multi-site chemical exchange, and the combination of molecular diffusion in a magnetic field gradient with chemical exchange. The application to diffusion in a gradient is used as an example to develop the DFM method for application to NMR. The existence of a known analytical solution and experimental results allows for direct comparison between the theoretical results of the DFM method and Torrey's solution to the Bloch equations corrected for molecular diffusion. The framework of writing classical Bloch equations in matrix notation is then applied to problems without analytical solution. The second example includes the generation of a semi-analytical functional form for the free
On the Analogy between Mathematical Problems of Non-Linear Filtering and Quantum Physics.
1980-06-01
...(3.1h) in a form which brings out the commutation properties of L0 and L1. If we denote [L0, L1] by L2 and L3 = [L1, L2], ... for unbounded observation operators h it is this equation which is the easiest to deal with. The above also shows that the commutators L2 and L3 have an ... commutators makes it clear that for certain problems these equations can be integrated using group invariance methods. (iv) The idea of using gauge ...
Exact analysis to any order of the linear coupling problem in the thin lens model
Ruggiero, A.G.
1991-12-31
In this report we attempt the exact solution of the motion of a charged particle in a circular accelerator under the effects of skew quadrupole errors. We adopt a model of error distributions lumped in locations with zero extension. This thin-lens approximation provides analytical insight into the problem to any order. The total solution is expressed in terms of driving terms, which are actually correlation factors to several orders. An application follows on the calculation and correction of tune splitting and on the estimate of the role the higher-order terms play in the correction method.
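The basic mechanism, tune splitting from a thin-lens skew quadrupole, can be seen from the eigenvalues of a 4×4 one-turn map. The lattice below is a toy model (unit beta functions, equal tunes, a single error), not the report's error distribution:

```python
import numpy as np

def rot2(mu):
    """2x2 phase-space rotation through phase advance mu (beta = 1)."""
    c, s = np.cos(mu), np.sin(mu)
    return np.array([[c, s], [-s, c]])

def one_turn(mux, muy, k):
    """Uncoupled one-turn rotations followed by a thin-lens skew
    quadrupole kick: dpx = k*y, dpy = k*x (a symplectic map)."""
    R = np.zeros((4, 4))
    R[:2, :2] = rot2(mux)
    R[2:, 2:] = rot2(muy)
    S = np.eye(4)
    S[1, 2] = k
    S[3, 0] = k
    return S @ R

def tunes(M):
    return sorted(abs(np.angle(ev)) / (2.0 * np.pi)
                  for ev in np.linalg.eigvals(M))

mu = 2.0 * np.pi * 0.31            # equal horizontal and vertical tunes
t0 = tunes(one_turn(mu, mu, 0.0))
t1 = tunes(one_turn(mu, mu, 0.02))
assert t0[-1] - t0[0] < 1e-9       # degenerate without the skew error
assert t1[-1] - t1[0] > 1e-3       # the skew quad splits the tunes
```

With many lumped errors the one-turn map is a product of such factors, and the report's driving terms organize the resulting split order by order in the error strengths.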
1986-01-01
...1985), 1-44. [19] V. Majer, Numerical solution of boundary value problems for ordinary differential equations of nonlinear elasticity, Ph.D. Thesis, Univ. ... based on the factorization method. ... Numerical methods for the solution of linear boundary value problems for ordinary differential equations are presented. The methods are optimal with respect to certain ...
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Vining, G. Geoffrey; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
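The two calibration strategies can be compared numerically: fit forward and invert, versus fit in reverse and use directly. The instrument model and noise level below are hypothetical:

```python
import random

random.seed(0)
# Hypothetical instrument: reading = 2.0 * standard + 1.0 + noise
standards = [float(s) for s in range(1, 11)]
readings = [2.0 * s + 1.0 + random.gauss(0.0, 0.05) for s in standards]

def ols(x, y):
    """Ordinary least squares fit y = b*x + a; returns (b, a)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

# Forward calibration (reading on standard), then invert to measure
bf, af = ols(standards, readings)
x_fwd = (21.0 - af) / bf              # measurement for a reading of 21.0
# Reverse calibration (standard on reading), used directly
br, ar = ols(readings, standards)
x_rev = br * 21.0 + ar
assert abs(x_fwd - 10.0) < 0.2 and abs(x_rev - 10.0) < 0.2
```

Both routes land near the true value here; the paper's point is that reverse regression, while convenient, violates the usual regression assumption that the regressor is measured without error, so its estimates are biased in general.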
Extended cubic B-spline method for solving a linear system of second-order boundary value problems.
Heilat, Ahmed Salem; Hamid, Nur Nadiah Abd; Ismail, Ahmad Izani Md
2016-01-01
A method based on extended cubic B-splines is proposed to solve a linear system of second-order boundary value problems. In this method, two free parameters, [Formula: see text] and [Formula: see text], play an important role in producing accurate results. Optimization of these parameters is carried out and the truncation error is calculated. The method is tested on three examples, which suggest that it produces comparable or more accurate results than the cubic B-spline and some other methods.
NASA Astrophysics Data System (ADS)
Gardner, Robin P.; Xu, Libai
2009-10-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
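A minimal sketch of the linear library least-squares (LLS) step, the core linear solve inside the MCLLS iteration described above. The two-element libraries and peak shapes here are invented for illustration, not CEAR data:

```python
import numpy as np

# Each column of A is an elemental library spectrum (generated by Monte
# Carlo in the real workflow); the measured spectrum is modeled as a
# linear combination of those columns.
channels = np.arange(8)
lib_fe = np.exp(-0.5 * (channels - 2.0) ** 2)   # hypothetical "iron" library
lib_si = np.exp(-0.5 * (channels - 5.0) ** 2)   # hypothetical "silicon" library
A = np.column_stack([lib_fe, lib_si])

true_amounts = np.array([3.0, 1.5])
spectrum = A @ true_amounts                      # noiseless measured spectrum

# The LLS step: solve for the elemental amounts by least squares.
amounts, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
print(amounts)   # recovers [3.0, 1.5]
```

In the full MCLLS approach this solve is repeated, regenerating the libraries from the updated composition until the two agree.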
Herman, Gabor T; Chen, Wei
2008-03-01
The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
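A bare-bones sketch of sequential projection for a linear feasibility problem, in the spirit of ART3 (ART3+ additionally skips checks on constraints that are likely already satisfied); the constraints below are invented:

```python
import numpy as np

# Feasibility problem: find x with a_i . x >= b_i for all rows.
# Here: x >= 0, y >= 0, x + y <= 3.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -3.0])

x = np.array([5.0, 5.0])          # infeasible starting point
for sweep in range(100):
    changed = False
    for a_i, b_i in zip(A, b):    # treat constraints cyclically, one by one
        viol = b_i - a_i @ x
        if viol > 1e-12:          # constraint violated: project onto its half-space
            x = x + viol * a_i / (a_i @ a_i)
            changed = True
    if not changed:               # a full sweep with no violations: feasible
        break

print(x, np.all(A @ x >= b - 1e-9))
```

This plain cyclic scheme converges when the feasible region is full dimensional; the paper's contribution is making each sweep cheaper by avoiding redundant constraint checks.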
Samet Y. Kadioglu; Robert R. Nourgaliev; Vincent A. Mousseau
2008-03-01
We perform a comparative study of harmonic versus arithmetic averaging of the heat conduction coefficient when solving non-linear heat transfer problems. In the literature, the harmonic average is the method of choice, because it is widely believed to be the more accurate model. However, our analysis reveals that this is not necessarily true. For instance, we show a case in which the harmonic average is less accurate when a coarser mesh is used. More importantly, we demonstrate that if the boundary layers are finely resolved, then the harmonic and arithmetic averaging techniques are identical in the truncation error sense. Our analysis further reveals that the accuracy of these two techniques depends on how the physical problem is modeled.
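The two face-coefficient averages under comparison can be written down directly; the sample values are invented and only illustrate that the choice matters most across a sharp conductivity jump:

```python
# Conductivity at the face between two cells with values k1 and k2.
def harmonic_avg(k1, k2):
    return 2.0 * k1 * k2 / (k1 + k2)

def arithmetic_avg(k1, k2):
    return 0.5 * (k1 + k2)

# For a smooth coefficient the two averages nearly agree ...
print(harmonic_avg(1.0, 1.1), arithmetic_avg(1.0, 1.1))
# ... but across a jump they differ strongly: ~1.98 vs 50.5.
# On a coarse mesh this difference feeds directly into the truncation error.
print(harmonic_avg(1.0, 100.0), arithmetic_avg(1.0, 100.0))
```

When the mesh resolves the layer so that k1 ≈ k2 at every face, the two formulas coincide to leading order, consistent with the paper's conclusion.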
NASA Astrophysics Data System (ADS)
Tidriri, M. D.
2003-07-01
In this paper, we establish error estimates for the generalized hybrid finite element/finite volume methods introduced in our earlier work (J. Comput. Appl. Math. 139 (2002) 323; Comm. Appl. Anal. 5(1) (2001) 91). These estimates are obtained for linear hyperbolic and convection-dominated convection-diffusion problems. Our analysis is performed for general meshes of a bounded polygonal domain satisfying the minimum angle condition. Our error estimates are new and represent significant improvements over the previously known error estimates established for the streamline diffusion and discontinuous Galerkin methods applied to hyperbolic and convection-dominated problems (Math. Comp. 46 (1986) 1; Comput. Methods Appl. Mech. Eng. 45 (1984) 285; in: C. de Boor (Ed.), Mathematical Aspects of Finite Elements in Partial Differential Equations, Academic Press, New York, 1974).
NASA Astrophysics Data System (ADS)
Halpern, Paul
2017-01-01
In 1978, John Wheeler proposed the delayed-choice thought experiment as a generalization of the classic double slit experiment intended to help elucidate the nature of decision making in quantum measurement. In particular, he wished to illustrate how a decision made after a quantum system was prepared might retrospectively affect the outcome. He extended his methods to the universe itself, raising the question of whether the universe is a "self-excited circuit" in which scientific measurements in the present affect the quantum dynamics in the past. In this talk we'll show how Wheeler's approach revived the notion of Bohr's complementarity, which had by then faded from the prevailing discourse of quantum measurement theory. Wheeler's advocacy reflected, in part, his wish to eliminate the divide in quantum theory between measurer and what was being measured, bringing greater consistency to the ideas of Bohr, a mentor whom he deeply respected.
Modeling Granular Materials as Compressible Non-Linear Fluids: Heat Transfer Boundary Value Problems
Massoudi, M.C.; Tran, P.X.
2006-01-01
We discuss three boundary value problems in the flow and heat transfer analysis in flowing granular materials: (i) the flow down an inclined plane with radiation effects at the free surface; (ii) the natural convection flow between two heated vertical walls; (iii) the shearing motion between two horizontal flat plates with heat conduction. It is assumed that the material behaves like a continuum, similar to a compressible nonlinear fluid where the effects of density gradients are incorporated in the stress tensor. For a fully developed flow the equations are simplified to a system of three nonlinear ordinary differential equations. The equations are made dimensionless and a parametric study is performed where the effects of various dimensionless numbers representing the effects of heat conduction, viscous dissipation, radiation, and so forth are presented.
Cost Cumulant-Based Control for a Class of Linear Quadratic Tracking Problems
2006-08-04
with the initial condition (t0, x0; u) ∈ [t0, tf] × R^n × L²_{Ft}(Ω; C([t0, tf]; R^m)) is a traditional finite-horizon IQF random cost J : [t0, tf] × R^n ... x0, (5) y(t) = C(t)x(t), (6) and the IQF random cost J(t0, x0; K, u_ext) = [z(tf) − y(tf)]^T Q_f [z(tf) − y(tf)] + ∫_{t0}^{tf} { [z(τ) − y(τ)]^T Q(τ) [z(τ ... track the prescribed signal z(t) with the finite-horizon IQF cost (7). For fixed k ∈ Z+, the kth cost cumulant in the tracking problem is given by κ_k(t0
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
On the Character of Quantum Law: Complementarity, Entanglement, and Information
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2017-08-01
This article considers the relationships between the character of physical law in quantum theory and Bohr's concept of complementarity, under the assumption of the unrepresentable and possibly inconceivable nature of quantum objects and processes, an assumption that may be seen as the most radical departure from realism currently available. Complementarity, the article argues, is a reflection of the fact that, as against classical physics or relativity, the behavior of quantum objects of the same type, say, all electrons, is not governed by the same physical law in all contexts, specifically in complementary contexts. On the other hand, the mathematical formalism of quantum mechanics offers correct probabilistic or statistical predictions (no other predictions are possible on experimental grounds) in all contexts, here, again, under the assumption that quantum objects themselves and their behavior are beyond representation or even conception. Bohr, in this connection, spoke of "an entirely new situation as regards the description of physical phenomena that the notion of complementarity aims at characterizing." The article also considers the relationships among complementarity, entanglement, and quantum information, basing these relationships on this understanding of complementarity.
Female choice for genetic complementarity in birds: a review.
Mays, Herman L; Albrecht, Tomas; Liu, Mark; Hill, Geoffrey E
2008-09-01
Data from avian species have played a prominent role in developing and testing theories of female mate choice. One of the most prominent models of sexual selection, the "good genes" model, emphasizes the indirect benefits of female preferences for male ornaments as indicators of a potential sire's additive genetic quality. However, there is growing interest in non-additive sources of genetic quality and in mate choice models of self-referential disassortative mating based on optimal levels of genetic dissimilarity. We reviewed the empirical evidence for genetic-complementarity-based female mate choice among birds. We found that the evidence for such choice is mixed but, on the whole, runs against the genetic complementarity hypothesis. The lack of evidence for genetic complementarity in many birds may be due to an inability to make fine distinctions among potential mates based on genes, possibly owing to the comparatively anosmic nature of the avian sensory system. For some species, however, there is compelling evidence for genetic complementarity as a criterion used in female mate choice. Understanding the ubiquity of female mate choice based on genetic complementarity, and the variation in this source of female preference among and within species, remains a challenge.
Data-driven non-linear elasticity: constitutive manifold construction and problem discretization
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Borzacchiello, Domenico; Aguado, Jose Vicente; Abisset-Chavanne, Emmanuelle; Cueto, Elias; Ladeveze, Pierre; Chinesta, Francisco
2017-07-01
The use of constitutive equations calibrated from data has been implemented into standard numerical solvers for successfully addressing a variety of problems encountered in simulation-based engineering sciences (SBES). However, the complexity keeps increasing, owing to the need for increasingly detailed models as well as the use of engineered materials. Data-driven simulation constitutes a potential change of paradigm in SBES. Standard simulation in computational mechanics is based on the use of two very different types of equations. The first, of axiomatic character, is related to balance laws (momentum, mass, energy, ...), whereas the second consists of models that scientists have extracted from collected data, either natural or synthetic. Data-driven (or data-intensive) simulation consists of directly linking experimental data to computers in order to perform numerical simulations. These simulations employ laws universally recognized as epistemic, while minimizing the need for explicit, often phenomenological, models. The main drawback of such an approach is the large amount of required data, some of which is inaccessible to present-day testing facilities. This difficulty can be circumvented in many cases, and in any case alleviated, by considering complex tests, collecting as many data as possible, and then using a data-driven inverse approach to generate the whole constitutive manifold from a few complex experimental tests, as discussed in the present work.
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, Efi; Schwenk, Jon; Tejedor, Alejandro
2015-04-01
Are the dynamics of meandering rivers non-linear? What information does the shape of an oxbow lake carry about its forming process? How to characterize self-dissimilar landscapes carrying the signature of larger-scale geologic or tectonic controls? Do we have proper frameworks for quantifying the topology and dynamics of deltaic systems? What can the structural complexity of river networks (erosional and depositional) reveal about their vulnerability and response to change? Can the structure and dynamics of river networks reveal potential hotspots of geomorphic change? All of the above problems are at the heart of understanding landscape evolution, relating process to structure and form, and developing methodologies for inferring how a system might respond to future changes. We argue that a new surge of rigorous methodologies is needed to address these problems. The innovations introduced herein are: (1) gradual wavelet reconstruction for depicting threshold nonlinearity (due to cutoffs) versus inherent nonlinearity (due to underlying dynamics) in river meandering, (2) graph theory for studying the topology and dynamics of deltaic river networks and their response to change, and (3) Lagrangian approaches combined with topology and non-linear dynamics for inferring sediment-driven hotspots of geomorphic change.
Media complementarity and health information seeking in Puerto Rico.
Tian, Yan; Robinson, James D
2014-01-01
This investigation applies the Orientation1-Stimulus-Orientation2-Response model to the antecedents and outcomes of individual-level complementarity of media use in health information seeking. A secondary analysis of the Health Information National Trends Survey Puerto Rico data suggests that education and gender were positively associated with individual-level media complementarity in health information seeking, which, in turn, was positively associated with awareness of health concepts and organizations; this awareness was positively associated with a specific health behavior: fruit and vegetable consumption. This study extends the research on media complementarity and health information use; it provides an integrative social psychological model empirically supported by the Health Information National Trends Survey Puerto Rico data.
Challenges to Bohr's Wave-Particle Complementarity Principle
NASA Astrophysics Data System (ADS)
Rabinowitz, Mario
2013-02-01
Contrary to Bohr's complementarity principle, in 1995 Rabinowitz proposed that by using entangled particles from the source it would be possible to determine which slit a particle goes through while still preserving the interference pattern in the Young's two slit experiment. In 2000, Kim et al. used spontaneous parametric down conversion to prepare entangled photons as their source, and almost achieved this. In 2012, Menzel et al. experimentally succeeded in doing this. When the source emits entangled particle pairs, the traversed slit is inferred from measurement of the entangled particle's location by using triangulation. The violation of complementarity breaches the prevailing probabilistic interpretation of quantum mechanics, and benefits Bohm's pilot-wave theory.
NASA Astrophysics Data System (ADS)
Benzaouia, Abdellah; Ouladsine, Mustapha; Ananou, Bouchra
2014-10-01
In this paper, the fault tolerant control problem for discrete-time switching systems with delay is studied. Sufficient conditions for building an observer are obtained by using a multiple Lyapunov function. These conditions are worked out in a new way, using the cone complementarity technique, to obtain new LMIs with slack variables and multiple weighted residual matrices. The obtained results are applied to a numerical example showing fault detection, fault localisation, and reconfiguration of the control to maintain asymptotic stability even in the presence of a permanent sensor fault.
NASA Astrophysics Data System (ADS)
Helman, E. Udi
This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
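For a reader unfamiliar with the LCP form used above, here is a toy sketch: a projected Gauss-Seidel solve of w = Mz + q, w ≥ 0, z ≥ 0, zᵀw = 0. The data are invented; the dissertation's market models are far larger and mixed LCPs solved with specialized software:

```python
import numpy as np

# A tiny symmetric positive definite LCP instance.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-4.0, -5.0])

z = np.zeros(2)
for it in range(200):
    for i in range(len(q)):
        # Residual of row i with z[i]'s own contribution removed,
        # then project the unconstrained update onto z[i] >= 0.
        r = q[i] + M[i] @ z - M[i, i] * z[i]
        z[i] = max(0.0, -r / M[i, i])

w = M @ z + q
print(z, w, z @ w)   # z >= 0, w >= 0, and z.w ~ 0 (complementarity)
```

Projected Gauss-Seidel converges for symmetric positive definite M; the equilibrium models above have more structure (Cournot first-order conditions plus network constraints) but reduce to this same complementarity format.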
NASA Astrophysics Data System (ADS)
Sidoryakina, V. V.; Sukhinov, A. I.
2017-06-01
A two-dimensional linearized model of coastal sediment transport due to the action of waves is studied. Up till now, one-dimensional sediment transport models have been used. The model under study makes allowance for complicated bottom relief, the porosity of the bottom sediment, the size and density of sediment particles, gravity, wave-generated shear stress, and other factors. For the corresponding initial-boundary value problem the uniqueness of a solution is proved, and an a priori estimate for the solution norm is obtained depending on integral estimates of the right-hand side, boundary conditions, and the norm of the initial condition. A conservative difference scheme with weights is constructed that approximates the continuous initial-boundary value problem. Sufficient conditions for the stability of the scheme, which impose constraints on its time step, are given. Numerical experiments for test problems of bottom sediment transport and bottom relief transformation are performed. The numerical results agree with actual physical experiments.
Bao, Yan; von Stosch, Alexandra; Park, Mona; Pöppel, Ernst
2017-01-01
In experimental aesthetics the relationship between the arts and cognitive neuroscience has gained particular interest in recent years. But has cognitive neuroscience indeed something to offer when studying the arts? Here we present a theoretical frame within which the concept of complementarity as a generative or creative principle is proposed; neurocognitive processes are characterized by the duality of complementary activities like bottom-up and top-down control, or logistical functions like temporal control and content functions like perceptions in the neural machinery. On that basis a thought pattern is suggested for aesthetic appreciations and cognitive appraisals in general. This thought pattern is deeply rooted in the history of philosophy and art theory since antiquity; and complementarity also characterizes neural operations as basis for cognitive processes. We then discuss some challenges one is confronted with in experimental aesthetics; in our opinion, one serious problem is the lack of a taxonomy of functions in psychology and neuroscience which is generally accepted. This deficit makes it next to impossible to develop acceptable models which are similar to what has to be modeled. Another problem is the severe language bias in this field of research as knowledge gained in many languages over the ages remains inaccessible to most scientists. Thus, an inspection of research results or theoretical concepts is necessarily too narrow. In spite of these limitations we provide a selective summary of some results and viewpoints with a focus on visual art and its appreciation. It is described how questions of art and aesthetic appreciations using behavioral methods and in particular brain-imaging techniques are analyzed and evaluated focusing on such issues like the representation of artwork or affective experiences. Finally, we emphasize complementarity as a generative principle on a practical level when artists and scientists work directly together which can
Complementarity and the Formation of Chondrite Parent Bodies: A Window on Dust Coagulation
NASA Astrophysics Data System (ADS)
Hubbard, A.; Mac Low, M.-M.
2017-05-01
Complementarity implies that chondrules and matrix within a given chondrite are co-genetic, drawn from a single mass reservoir. Complementarity also requires that chondrite assembly sampled that mass reservoir evenly, which constrains dust growth.
NASA Astrophysics Data System (ADS)
Hadjiconstantinou, N. G.; Al-Mohssen, H. A.
2005-06-01
We investigate the time evolution of an impulsive start problem for arbitrary Knudsen numbers (Kn) using a linearized kinetic formulation. The early-time behaviour is described by a solution of the collisionless Boltzmann equation. The same solution can be used to describe the late-time behaviour for Kn ≫ 1. The late-time behaviour for Kn < 0.5 is captured by a newly proposed second-order slip model with no adjustable parameters. All theoretical results are verified by direct Monte Carlo solutions of the nonlinear Boltzmann equation. A measure of the timescale to steady state, normalized by the momentum diffusion timescale, shows that the timescale to steady state is significantly extended by ballistic transport, even at low Knudsen numbers where the latter is only important close to the system walls. This effect is captured for Kn < 0.5 by the slip model which predicts the equivalent effective domain size increase (slip length).
Blow-up rates of solutions of initial-boundary value problems for a quasi-linear parabolic equation
NASA Astrophysics Data System (ADS)
Anada, Koichi; Ishiwata, Tetsuya
2017-01-01
We consider initial-boundary value problems for a quasi-linear parabolic equation, k_t = k^2 (k_θθ + k), with zero Dirichlet boundary conditions and positive initial data. It is known that each solution blows up at a finite time T with a rate faster than √((T − t)^{-1}). In this paper, it is proved that sup_θ k(θ, t) ≈ √((T − t)^{-1} log log (T − t)^{-1}) as t ↗ T under some assumptions. Our strategy is based on the analysis of curve shortening flows with self-crossings developed by S.B. Angenent and J.J.L. Velázquez. In addition, we prove some of the numerical conjectures by Watterson which are key to establishing the blow-up rate.
NASA Astrophysics Data System (ADS)
Khan, Junaid Ali; Zahoor Raja, Muhammad Asif; Rashidi, Mohammad Mehdi; Syam, Muhammad Ibrahim; Majid Wazwaz, Abdul
2015-10-01
In this research, the well-known non-linear Lane-Emden-Fowler (LEF) equations are approximated by developing a nature-inspired stochastic computational intelligence algorithm. A trial solution of the model is formulated as an artificial feed-forward neural network containing unknown adjustable parameters. From the LEF equation and its initial conditions, an energy function is constructed that the algorithm uses to optimise the networks in an unsupervised way. The proposed scheme is tested successfully by applying it to various test cases of initial value problems of LEF equations. The reliability and effectiveness of the scheme are validated through comprehensive statistical analysis. The obtained numerical results are in good agreement with their corresponding exact solutions, which confirms the improvement made by the proposed approach.
NASA Astrophysics Data System (ADS)
Barutello, Vivina; Jadanza, Riccardo D.; Portaluri, Alessandro
2016-01-01
It is well known that the linear stability of the Lagrangian elliptic solutions in the classical planar three-body problem depends on a mass parameter β and on the eccentricity e of the orbit. We consider only the circular case ( e = 0) but under the action of a broader family of singular potentials: α-homogeneous potentials, for α in (0, 2), and the logarithmic one. It turns out indeed that the Lagrangian circular orbit persists also in this more general setting. We discover a region of linear stability expressed in terms of the homogeneity parameter α and the mass parameter β, then we compute the Morse index of this orbit and of its iterates and we find that the boundary of the stability region is the envelope of a family of curves on which the Morse indices of the iterates jump. In order to conduct our analysis we rely on a Maslov-type index theory devised and developed by Y. Long, X. Hu and S. Sun; a key role is played by an appropriate index theorem and by some precise computations of suitable Maslov-type indices.
NASA Astrophysics Data System (ADS)
Holota, Petr; Nesvadba, Otakar
2017-04-01
The aim of this paper is to discuss the solution of the linearized gravimetric boundary value problem by means of the method of successive approximations. We start with the relation between the geometry of the solution domain and the structure of Laplace's operator. As in other branches of engineering and mathematical physics, a transformation of coordinates is used that offers the possibility of trading boundary complexity against the complexity of the coefficients of the partial differential equation governing the solution. Laplace's operator has a relatively simple structure in terms of ellipsoidal coordinates, which are frequently used in geodesy. However, the physical surface of the Earth differs substantially from an oblate ellipsoid of revolution, even an optimally fitted one. Therefore, an alternative is discussed: a system of general curvilinear coordinates such that the physical surface of the Earth is embedded in the family of coordinate surfaces. Clearly, the structure of Laplace's operator is more complicated in this case. It was deduced by means of tensor calculus and in a sense represents the topography of the physical surface of the Earth. Nevertheless, the construction of the respective Green's function is simpler if the solution domain is transformed. This enables the use of the classical Green's function method, together with the method of successive approximations, for the solution of the linear gravimetric boundary value problem expressed in terms of the new coordinates. The structure of the iteration steps is analyzed and, where useful, modified by means of integration by parts. A comparison with other methods is discussed.
On the complementarity of ECD and VCD techniques.
Nicu, Valentin Paul; Mándi, Attila; Kurtán, Tibor; Polavarapu, Prasad L
2014-09-01
An unprecedented complementarity of electronic circular dichroism (ECD) and vibrational circular dichroism (VCD) spectroscopic techniques is demonstrated by showing that each technique reveals the structure of a different molecular segment. Using a flexible molecule of biological significance we show that the synergetic use of ECD and VCD yields more complete structural characterization as it provides improved and more reliable conformer resolution.
Couple Complementarity and Similarity: A Review of the Literature.
ERIC Educational Resources Information Center
White, Stephen G.; Hatcher, Chris
1984-01-01
Examines couple complementarity and similarity, and their relationship to dyadic adjustment, from three perspectives: social/psychological research, clinical populations research, and the observations of family therapists. Methodological criticisms are discussed suggesting that the evidence for a relationship between similarity and…
Generalized uncertainty principle: implications for black hole complementarity
NASA Astrophysics Data System (ADS)
Chen, Pisin; Ong, Yen Chin; Yeom, Dong-han
2014-12-01
At the heart of the black hole information loss paradox and the firewall controversy lies the conflict between quantum mechanics and general relativity. Much has been said about quantum corrections to general relativity, but much less in the opposite direction. It is therefore crucial to examine possible corrections to quantum mechanics due to gravity. Indeed, the Heisenberg Uncertainty Principle is one profound feature of quantum mechanics, which nevertheless may receive corrections when gravitational effects become important. Such a generalized uncertainty principle (GUP) has been motivated not only by quite general considerations of quantum mechanics and gravity, but also by string theoretic arguments. We examine the role of the GUP in the context of black hole complementarity. We find that while complementarity can be violated by large-N rescaling if one assumes only the Heisenberg Uncertainty Principle, the application of the GUP may save complementarity, but only if a certain N-dependence is also assumed. This raises two important questions beyond the scope of this work, i.e., whether the GUP really has the proposed form of N-dependence, and whether black hole complementarity is indeed correct.
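For context, one commonly quoted string-inspired form of the GUP (a standard form in the literature, not necessarily the exact one adopted in this paper) is:

```latex
\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,\frac{\ell_p^{2}\,(\Delta p)^{2}}{\hbar^{2}}\right),
\]
```

where $\ell_p$ is the Planck length and $\beta$ a dimensionless parameter; minimizing the right-hand side over $\Delta p$ yields a minimal resolvable length $\Delta x_{\min} \sim \sqrt{\beta}\,\ell_p$, which is what blocks the arbitrarily sharp position measurements that the large-N rescaling argument relies on.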
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Paraschivoiu, Marius
1998-01-01
We present a finite element technique for the efficient generation of lower and upper bounds to outputs which are linear functionals of the solutions to the incompressible Stokes equations in two space dimensions; the finite element discretization is effected by Crouzeix-Raviart elements, the discontinuous pressure approximation of which is central to our approach. The bounds are based upon the construction of an augmented Lagrangian: the objective is a quadratic "energy" reformulation of the desired output; the constraints are the finite element equilibrium equations (including the incompressibility constraint), and the intersubdomain continuity conditions on velocity. Appeal to the dual max-min problem for appropriately chosen candidate Lagrange multipliers then yields inexpensive bounds for the output associated with a fine-mesh discretization; the Lagrange multipliers are generated by exploiting an associated coarse-mesh approximation. In addition to the requisite coarse-mesh calculations, the bound technique requires solution only of local subdomain Stokes problems on the fine-mesh. The method is illustrated for the Stokes equations, in which the outputs of interest are the flowrate past, and the lift force on, a body immersed in a channel.
Jan Hesthaven
2012-02-06
Final report for DOE Contract DE-FG02-98ER25346 entitled Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences. Principal Investigator Jan S. Hesthaven Division of Applied Mathematics Brown University, Box F Providence, RI 02912 Jan.Hesthaven@Brown.edu February 6, 2012 Note: This grant was originally awarded to Professor David Gottlieb and the majority of the work envisioned reflects his original ideas. However, when Prof Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results The effort in this project focuses on the development of high order accurate computational methods for the solution of hyperbolic equations with application to problems with strong shocks. While the methods are general, emphasis is on applications to gas dynamics with strong shocks.
NASA Astrophysics Data System (ADS)
Karterakis, Stefanos M.; Karatzas, George P.; Nikolos, Ioannis K.; Papadopoulou, Maria P.
2007-09-01
In the past, optimization techniques have been combined with simulation models to determine cost-effective solutions for various environmental management problems. In the present study, a groundwater management problem in a coastal karstic aquifer in Crete, Greece, subject to environmental criteria, has been studied using classical linear programming and heuristic optimization methodologies. A numerical simulation model of the unconfined coastal aquifer was first developed to represent the complex non-linear physical system. Then the classical linear programming optimization algorithm of the Simplex method is used to solve the groundwater management problem, where the main objective is the hydraulic control of the saltwater intrusion. A piecewise linearization of the non-linear optimization problem is obtained by sequential implementation of the Simplex algorithm, and convergence to the optimal solution is achieved. The solution of the non-linear management problem is also obtained using a heuristic algorithm: a Differential Evolution (DE) algorithm that emulates some of the principles of evolution. A comparison of the results obtained by the two different optimization approaches is presented. Finally, a sensitivity analysis is employed in order to examine the influence of the active pumping wells on the evolution of the seawater intrusion front along the coastline.
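The hydraulic-control LP can be sketched in miniature: maximize total pumping subject to linear drawdown limits at control points near the coast. The response coefficients, limits, and two-well setup below are all invented for illustration, and a tiny vertex-enumeration routine stands in for the Simplex solver used in the paper.

```python
from itertools import combinations

def solve_lp_2d(A, b, c):
    """Maximize c.x subject to A x <= b for x in R^2 by enumerating
    vertices of the feasible polygon (toy substitute for Simplex)."""
    best, best_x = None, None
    for i, j in combinations(range(len(A)), 2):
        a1, a2 = A[i], A[j]
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel constraints: no unique intersection
        x = (b[i] * a2[1] - a1[1] * b[j]) / det
        y = (a1[0] * b[j] - b[i] * a2[0]) / det
        # keep the intersection only if it satisfies every constraint
        if all(A[k][0] * x + A[k][1] * y <= b[k] + 1e-9 for k in range(len(A))):
            val = c[0] * x + c[1] * y
            if best is None or val > best:
                best, best_x = val, (x, y)
    return best, best_x

# Hypothetical drawdown-response rows: each limits a weighted sum of the
# two pumping rates so the saltwater front stays offshore.
A = [[0.8, 0.3],    # control point 1
     [0.3, 0.9],    # control point 2
     [-1.0, 0.0],   # q1 >= 0
     [0.0, -1.0]]   # q2 >= 0
b = [4.0, 4.5, 0.0, 0.0]
total, q = solve_lp_2d(A, b, c=[1.0, 1.0])  # maximize total pumping
```

In the paper's sequential linearization, an LP like this would be re-solved each iteration with coefficients refreshed from the aquifer simulation model.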
NASA Technical Reports Server (NTRS)
Lee, Y. M.
1971-01-01
Using a linearized theory of a thermally and mechanically interacting mixture of a linear elastic solid and a viscous fluid, we derive a fundamental relation in integral form called a reciprocity relation. This reciprocity relation relates the solution of one initial-boundary value problem, with a given set of initial and boundary data, to the solution of a second initial-boundary value problem corresponding to different initial and boundary data for a given interacting mixture. From this general integral relation, reciprocity relations are derived for a heat-conducting linear elastic solid and for a heat-conducting viscous fluid. An initial-boundary value problem is posed and solved for the mixture of linear elastic solid and viscous fluid. With the aid of the Laplace transform and contour integration, a real integral representation for the displacement of the solid constituent is obtained as one of the principal results of the analysis.
Addona, Davide
2015-08-15
We obtain weighted uniform estimates for the gradient of the solutions to a class of linear parabolic Cauchy problems with unbounded coefficients. Such estimates are then used to prove existence and uniqueness of the mild solution to a semi-linear backward parabolic Cauchy problem, where the differential equation is the Hamilton–Jacobi–Bellman equation of a suitable optimal control problem. Via backward stochastic differential equations, we show that the mild solution is indeed the value function of the controlled equation and that the feedback law is verified.
Akcelik, Volkan; Flath, Pearl; Ghattas, Omar; Hill, Judith C; Van Bloemen Waanders, Bart; Wilcox, Lucas
2011-01-01
We consider the problem of estimating the uncertainty in large-scale linear statistical inverse problems with high-dimensional parameter spaces within the framework of Bayesian inference. When the noise and prior probability densities are Gaussian, the solution to the inverse problem is also Gaussian, and is thus characterized by the mean and covariance matrix of the posterior probability density. Unfortunately, explicitly computing the posterior covariance matrix requires as many forward solutions as there are parameters, and is thus prohibitive when the forward problem is expensive and the parameter dimension is large. However, for many ill-posed inverse problems, the Hessian matrix of the data misfit term has a spectrum that collapses rapidly to zero. We present a fast method for computation of an approximation to the posterior covariance that exploits the low-rank structure of the preconditioned (by the prior covariance) Hessian of the data misfit. Analysis of an infinite-dimensional model convection-diffusion problem, and numerical experiments on large-scale 3D convection-diffusion inverse problems with up to 1.5 million parameters, demonstrate that the number of forward PDE solves required for an accurate low-rank approximation is independent of the problem dimension. This permits scalable estimation of the uncertainty in large-scale ill-posed linear inverse problems at a small multiple (independent of the problem dimension) of the cost of solving the forward problem.
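The low-rank update can be written explicitly. In commonly used notation (assuming a Gaussian prior with covariance $\Gamma_{\mathrm{pr}}$ and data-misfit Hessian $H$; the symbols here follow the standard presentation of this construction, not necessarily this paper's), if the prior-preconditioned Hessian is well approximated by its $r$ dominant eigenpairs,

```latex
\[
\tilde H = \Gamma_{\mathrm{pr}}^{1/2}\, H\, \Gamma_{\mathrm{pr}}^{1/2}
         \approx \sum_{i=1}^{r} \lambda_i\, v_i v_i^{T},
\qquad
\Gamma_{\mathrm{post}} = \bigl(H + \Gamma_{\mathrm{pr}}^{-1}\bigr)^{-1}
\approx \Gamma_{\mathrm{pr}}^{1/2}
\Bigl(I - \sum_{i=1}^{r} \frac{\lambda_i}{1+\lambda_i}\, v_i v_i^{T}\Bigr)
\Gamma_{\mathrm{pr}}^{1/2},
\]
```

which follows from the Sherman-Morrison-Woodbury identity: eigendirections with $\lambda_i \approx 0$ contribute nothing, so only the $r$ data-informed modes, each obtainable with a handful of forward/adjoint solves, are needed regardless of the parameter dimension.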
NASA Technical Reports Server (NTRS)
Hall, Philip
1989-01-01
Goertler vortices are thought to be the cause of transition in many fluid flows of practical importance. A review of the different stages of vortex growth is given. In the linear regime, nonparallel effects completely govern this growth, and parallel flow theories do not capture the essential features of the development of the vortices. A detailed comparison between the parallel and nonparallel theories is given and it is shown that at small vortex wavelengths, the parallel flow theories have some validity; otherwise nonparallel effects are dominant. New results for the receptivity problem for Goertler vortices are given; in particular vortices induced by free stream perturbations impinging on the leading edge of the walls are considered. It is found that the most dangerous mode of this type can be isolated and its neutral curve is determined. This curve agrees very closely with the available experimental data. A discussion of the different regimes of growth of nonlinear vortices is also given. Again it is shown that, unless the vortex wavelength is small, nonparallel effects are dominant. Some new results for nonlinear vortices of O(1) wavelengths are given and compared to experimental observations.
Saul Rosenzweig's purview: from experimenter/experimentee complementarity to idiodynamics.
Rosenzweig, Saul
2004-06-01
Following a brief personal biography, an exposition of Saul Rosenzweig's scientific contributions is presented. Starting in 1933 with experimenter/experimentee complementarity, this point of view was extended to implicit common factors in psychotherapy (Rosenzweig, 1936) and then to the complementary pattern of the so-called schools of psychology (Rosenzweig, 1937). Similarly, converging approaches in personality theory emerged as another type of complementarity (Rosenzweig, 1944a). The three types of norms (nomothetic, demographic, and idiodynamic) within the range of dynamic human behavior were formulated and led to idiodynamics as a successor to personality theory. This formulation included the concept of the idioverse, defined as a self-creative and experiential population of events, which opened up a methodology (psychoarcheology) for reconstructing the creativity of outstanding scientific and artistic craftsmen like William James and Sigmund Freud among psychologists, and Henry James, Herman Melville, and Nathaniel Hawthorne among writers of fiction.
The complementarity relations of quantum coherence in quantum information processing
Pan, Fei; Qiu, Liang; Liu, Zhi
2017-01-01
We establish two complementarity relations for the relative entropy of coherence in quantum information processing, i.e., quantum dense coding and teleportation. We first give an uncertainty-like expression relating local quantum coherence to the capacity of optimal dense coding for bipartite system. The relation can also be applied to the case of dense coding by using unital memoryless noisy quantum channels. Further, the relation between local quantum coherence and teleportation fidelity for two-qubit system is given. PMID:28272481
Sexual complementarity between host humoral toxicity and soldier caste in a polyembryonic wasp
Uka, Daisuke; Sakamoto, Takuma; Yoshimura, Jin; Iwabuchi, Kikuo
2016-01-01
Defense against enemies is a type of natural selection considered fundamentally equivalent between the sexes. In reality, however, whether males and females differ in defense strategy is unknown. Multiparasitism necessarily leads to the problem of defense for a parasite (parasitoid). The polyembryonic parasitic wasp Copidosoma floridanum is famous for its larval soldiers’ ability to kill other parasites. This wasp also exhibits sexual differences not only with regard to the competitive ability of the soldier caste but also with regard to host immune enhancement. Female soldiers are more aggressive than male soldiers, and their numbers increase upon invasion of the host by other parasites. In this report, in vivo and in vitro competition assays were used to test whether females have a toxic humoral factor; if so, then its strength was compared with that of males. We found that females have a toxic factor that is much weaker than that of males. Our results imply sexual complementarity between host humoral toxicity and larval soldiers. We discuss how this sexual complementarity guarantees adaptive advantages for both males and females despite the one-sided killing of male reproductives by larval female soldiers in a mixed-sex brood. PMID:27385149
Benefits of integrating complementarity into priority threat management.
Chadés, Iadine; Nicol, Sam; van Leeuwen, Stephen; Walters, Belinda; Firn, Jennifer; Reeson, Andrew; Martin, Tara G; Carwardine, Josie
2015-04-01
Conservation decision tools based on cost-effectiveness analysis are used to assess threat management strategies for improving species persistence. These approaches rank alternative strategies by their benefit to cost ratio but may fail to identify the optimal sets of strategies to implement under limited budgets because they do not account for redundancies. We devised a multiobjective optimization approach in which the complementarity principle is applied to identify the sets of threat management strategies that protect the most species for any budget. We used our approach to prioritize threat management strategies for 53 species of conservation concern in the Pilbara, Australia. We followed a structured elicitation approach to collect information on the benefits and costs of implementing 17 different conservation strategies during a 3-day workshop with 49 stakeholders and experts in the biodiversity, conservation, and management of the Pilbara. We compared the performance of our complementarity priority threat management approach with a current cost-effectiveness ranking approach. A complementary set of 3 strategies: domestic herbivore management, fire management and research, and sanctuaries provided all species with >50% chance of persistence for $4.7 million/year over 20 years. Achieving the same result cost almost twice as much ($9.71 million/year) when strategies were selected by their cost-effectiveness ranks alone. Our results show that complementarity of management benefits has the potential to double the impact of priority threat management approaches. © 2014 Society for Conservation Biology.
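The gain from accounting for complementarity can be sketched with a toy greedy heuristic that repeatedly adds the strategy with the best *marginal* species gain per unit cost, so redundant strategies are skipped. The strategy names, species sets, costs, and budget below are invented for illustration and are not the Pilbara data.

```python
def pick_complementary(strategies, budget):
    """Greedy complementarity heuristic: at each step add the affordable
    strategy with the highest marginal (not total) species gain per cost."""
    chosen, covered, spent = [], set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, (species, cost) in strategies.items():
            if name in chosen or spent + cost > budget:
                continue
            gain = len(species - covered)  # only newly protected species count
            if gain and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            return chosen, covered
        chosen.append(best)
        covered |= strategies[best][0]
        spent += strategies[best][1]

# Hypothetical strategies: (set of species protected, cost in $M/yr)
strategies = {
    "herbivore_mgmt": ({"A", "B", "C"}, 2.0),
    "fire_mgmt":      ({"B", "C", "D"}, 1.5),
    "sanctuaries":    ({"E", "F"},      1.0),
    "weed_control":   ({"A", "B"},      0.5),
}
chosen, covered = pick_complementary(strategies, budget=4.0)
```

A plain cost-effectiveness ranking scores each strategy by its total benefit and would happily spend budget on `herbivore_mgmt` even though most of its species are already covered; the marginal-gain rule is what encodes complementarity.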
NASA Astrophysics Data System (ADS)
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
Neural network for solving Nash equilibrium problem in application of multiuser power control.
He, Xing; Yu, Junzhi; Huang, Tingwen; Li, Chuandong; Li, Chaojie
2014-09-01
In this paper, based on an equivalent mixed linear complementarity problem, we propose a neural network to solve multiuser power control optimization problems (MPCOP), modeled as a noncooperative Nash game in modern digital subscriber line (DSL) systems. If the channel crosstalk coefficient matrix is positive semidefinite, it is shown that the proposed neural network is stable in the sense of Lyapunov and globally convergent to a Nash equilibrium, and the Nash equilibrium is unique if the channel crosstalk coefficient matrix is positive definite. Finally, simulation results on two numerical examples show the effectiveness and performance of the proposed neural network.
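For readers unfamiliar with the underlying problem class: a minimal sketch of one classical way to solve an LCP numerically is a projected fixed-point iteration, which is the discrete analogue of the projection dynamics such networks implement. This is not the paper's neural network; the 2x2 positive-definite matrix below is invented and merely stands in for the crosstalk matrix.

```python
def solve_lcp(M, q, alpha=0.3, iters=500):
    """Projection iteration for the LCP: find x >= 0 with
    w = M x + q >= 0 and x.w = 0. Converges for positive-definite M
    and a small enough step alpha (alpha < 2 / lambda_max)."""
    n = len(q)
    x = [0.0] * n
    for _ in range(iters):
        w = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
        # project the gradient step back onto the nonnegative orthant
        x = [max(0.0, x[i] - alpha * w[i]) for i in range(n)]
    return x

M = [[2.0, 1.0], [1.0, 2.0]]  # hypothetical stand-in for the crosstalk matrix
q = [-3.0, -1.0]
x = solve_lcp(M, q)  # converges to approximately [1.5, 0.0]
```

At the solution, each component is either at its bound (x_i = 0 with slack w_i > 0) or interior (w_i = 0), which is exactly the complementarity condition characterizing the Nash equilibrium.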
On reducibility of degenerate optimization problems to regular operator equations
NASA Astrophysics Data System (ADS)
Bednarczuk, E. M.; Tretyakov, A. A.
2016-12-01
We present an application of the p-regularity theory to the analysis of non-regular (irregular, degenerate) nonlinear optimization problems. The p-regularity theory, also known as the p-factor analysis of nonlinear mappings, was developed during the last thirty years. The p-factor analysis is based on the construction of the p-factor operator, which allows us to analyze optimization problems in the degenerate case. We investigate reducibility of a non-regular optimization problem to a regular system of equations which does not depend on the objective function. As an illustration, we consider applications of our results to non-regular complementarity problems of mathematical programming and to linear programming problems.
Spatio-temporal complementarity of wind and solar power in India
NASA Astrophysics Data System (ADS)
Lolla, Savita; Baidya Roy, Somnath; Chowdhury, Sourangshu
2015-04-01
Wind and solar power are likely to be part of the solution to the climate change problem, which is why they feature prominently in the energy policies of all industrial economies, including India. One of the major hindrances preventing explosive growth of wind and solar energy is intermittency. This is a major problem because, in a rapidly moving economy, energy production must match the patterns of energy demand. Moreover, sudden increases and decreases in energy supply may destabilize the power grids, leading to disruptions in power supply. In this work we explore whether the patterns of variability in wind and solar energy availability can offset each other so that a constant supply can be guaranteed. As a first step, this work focuses on seasonal-scale variability for each of the 5 regional power transmission grids in India. Communication within each grid is better than communication between grids; hence, it is assumed that each grid can switch sources relatively easily. Wind and solar resources are estimated using the MERRA Reanalysis data for the 1979-2013 period. Solar resources are calculated with a 20% conversion efficiency. Wind resources are estimated using a 2 MW turbine power curve. Total resources are obtained by optimizing the location and number of wind/solar energy farms. Preliminary results show that the southern and western grids are more appropriate for cogeneration than the other grids. Many studies on wind-solar cogeneration have focused on temporal complementarity at the local scale. However, this is one of the first studies to explore spatial complementarity over regional scales. This project may help accelerate renewable energy penetration in India by identifying regional grid(s) where the renewable energy intermittency problem can be minimized.
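A simple way to quantify seasonal complementarity is the Pearson correlation between monthly wind and solar capacity factors: a strongly negative value means the two seasonal cycles offset each other. The twelve monthly values below are invented illustrative numbers (wind peaking in monsoon months when solar dips), not MERRA-derived data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative monthly capacity factors, Jan..Dec (hypothetical values)
wind  = [0.18, 0.20, 0.24, 0.30, 0.38, 0.45, 0.48, 0.44, 0.32, 0.22, 0.18, 0.17]
solar = [0.24, 0.26, 0.27, 0.26, 0.24, 0.18, 0.15, 0.16, 0.20, 0.23, 0.24, 0.23]
r = pearson(wind, solar)  # negative r indicates seasonal complementarity
```

In a grid-level study, each regional grid would get such a coefficient, and the grids with the most negative values are the natural candidates for wind-solar cogeneration.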
ERIC Educational Resources Information Center
Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice
2015-01-01
The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…
Meyer, J C; Needham, D J
2015-03-08
In this paper, we examine a semi-linear parabolic Cauchy problem with non-Lipschitz nonlinearity which arises as a generic form in a significant number of applications. Specifically, we obtain a well-posedness result and examine the qualitative structure of the solution in detail. The standard classical approach to establishing well-posedness is precluded owing to the lack of Lipschitz continuity for the nonlinearity. Here, existence and uniqueness of solutions is established via the recently developed generic approach to this class of problem (Meyer & Needham 2015 The Cauchy problem for non-Lipschitz semi-linear parabolic partial differential equations. London Mathematical Society Lecture Note Series, vol. 419) which examines the difference of the maximal and minimal solutions to the problem. From this uniqueness result, the approach of Meyer & Needham allows for development of a comparison result which is then used to exhibit global continuous dependence of solutions to the problem on a suitable initial dataset. The comparison and continuous dependence results obtained here are novel to this class of problem. This class of problem arises specifically in the study of a one-step autocatalytic reaction, which is schematically given by A→B at rate a(p)b(q) (where a and b are the concentrations of A and B, respectively, with 0
ERIC Educational Resources Information Center
Strickland, Tricia K.; Maccini, Paula
2013-01-01
We examined the effects of the Concrete-Representational-Abstract Integration strategy on the ability of secondary students with learning disabilities to multiply linear algebraic expressions embedded within contextualized area problems. A multiple-probe design across three participants was used. Results indicated that the integration of the…
NASA Astrophysics Data System (ADS)
Trifonov, E. V.
2017-07-01
We propose a procedure for multiplying solutions of linear and nonlinear one-dimensional wave equations, where the speed of sound can be an arbitrary function of one variable. We obtain exact solutions. We show that the functional series comprising these solutions can be used to solve initial boundary value problems. For this, we introduce a special scalar product.
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the best-fit strain tensors, in the least-squares sense, at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions at assembly time.
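The displacement method the report describes can be illustrated in the simplest possible setting: a 1D bar of two-node linear elements, fixed at one end and loaded at the tip. The element count, stiffness EA, bar length, and tip load below are arbitrary illustrative values, and a tiny Gaussian elimination stands in for the program's in-core solver.

```python
def assemble_bar(n_elems, L, EA):
    """Assemble the global stiffness matrix for a 1D bar discretized
    into n_elems two-node linear elements (displacement method)."""
    h = L / n_elems
    k = EA / h                      # element stiffness
    n = n_elems + 1                 # number of nodes
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):        # scatter each 2x2 element matrix
        K[e][e]     += k; K[e][e + 1]     -= k
        K[e + 1][e] -= k; K[e + 1][e + 1] += k
    return K

def solve_fixed_free(K, f):
    """Solve K u = f with u[0] = 0 (fixed end) by Gaussian elimination."""
    n = len(K)
    A = [row[1:] + [f[i]] for i, row in enumerate(K)][1:]  # drop fixed dof
    m = n - 1
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            fac = A[r][col] / A[col][col]
            A[r] = [a - fac * b for a, b in zip(A[r], A[col])]
    u = [0.0] * m
    for r in reversed(range(m)):
        u[r] = (A[r][m] - sum(A[r][c] * u[c] for c in range(r + 1, m))) / A[r][r]
    return [0.0] + u

K = assemble_bar(n_elems=2, L=2.0, EA=10.0)
u = solve_fixed_free(K, f=[0.0, 0.0, 5.0])  # tip load P = 5
```

For this load case the exact tip deflection is P*L/EA = 1.0, and the linear elements reproduce it exactly, consistent with the monotonic-convergence claim above.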
Further Development in the Global Resolution of Convex Programs with Complementarity Constraints
2014-04-09
AFRL-OSR-VA-TR-2014-0126: Further Development in the Global Resolution of Convex Programs with Complementarity Constraints. Final Report, Angelia Nedich, University of Illinois. The report discusses various methods to tighten the relaxation by exploiting complementarity, with the aim of constructing better approximations to the convex hull of…
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
Reinforcement learning in complementarity game and population dynamics.
Jost, Jürgen; Li, Wei
2014-02-01
We systematically test and compare different reinforcement learning schemes in a complementarity game [J. Jost and W. Li, Physica A 345, 245 (2005)] played between members of two populations. More precisely, we study the Roth-Erev, Bush-Mosteller, and SoftMax reinforcement learning schemes. A modified version of Roth-Erev with a power exponent of 1.5, as opposed to 1 in the standard version, performs best. We also compare these reinforcement learning strategies with evolutionary schemes. This gives insight into aspects like the issue of quick adaptation as opposed to systematic exploration or the role of learning rates.
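The scheme the study finds best can be sketched compactly: choice probabilities proportional to propensity raised to a power exponent (1.5 in the modified version, versus 1 in standard Roth-Erev), with the chosen action's propensity reinforced by its payoff. The forgetting rate, payoffs, and two-action toy game below are invented for illustration, not the paper's complementarity game.

```python
import random

def roth_erev_choice(propensities, lam=1.5, rng=random):
    """Pick an action with probability proportional to propensity**lam.
    lam = 1 is standard Roth-Erev; the modified version uses lam = 1.5."""
    weights = [p ** lam for p in propensities]
    r = rng.random() * sum(weights)
    for action, w in enumerate(weights):
        r -= w
        if r <= 0:
            return action
    return len(weights) - 1

def roth_erev_update(propensities, action, reward, forgetting=0.1):
    """Reinforce the chosen action; mild forgetting keeps propensities bounded."""
    return [(1 - forgetting) * p + (reward if a == action else 0.0)
            for a, p in enumerate(propensities)]

# Toy run: action 1 always pays off, so its propensity should come to dominate.
rng = random.Random(0)
props = [1.0, 1.0]
for _ in range(200):
    a = roth_erev_choice(props, rng=rng)
    props = roth_erev_update(props, a, reward=1.0 if a == 1 else 0.0)
```

The power exponent sharpens exploitation: raising propensities to the 1.5 power widens the probability gap between well- and poorly-rewarded actions, which is one plausible reading of why the modified scheme adapts faster in the population game.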
Induced Coherence, Vacuum Fields, and Complementarity in Biphoton Generation
NASA Astrophysics Data System (ADS)
Heuer, A.; Menzel, R.; Milonni, P. W.
2015-02-01
It is well established that spontaneous parametric down-conversion with induced coherence across two coupled interferometers results in high-visibility single-photon interference. We describe experiments in which additional photon channels are introduced such that "which-path" information is made possible and the fringe visibility in single-photon interference is reduced in accordance with basic notions of complementarity. However, these additional pathways result in nearly perfect visibility when photons are counted in coincidence. A simplified theoretical model accounts for these observations and attributes them directly to the vacuum fields at the different crystals.
Complementarity of the Maldacena and Randall-Sundrum Pictures
Duff; Liu
2000-09-04
We revive an old result, that one-loop corrections to the graviton propagator induce 1/r^3 corrections to the Newtonian gravitational potential, and compute the coefficient due to closed loops of the U(N) N = 4 super-Yang-Mills theory that arises in Maldacena's anti-de Sitter conformal field theory correspondence. We find exact agreement with the coefficient appearing in the Randall-Sundrum brane-world proposal. This provides more evidence for the complementarity of the two pictures.
NASA Astrophysics Data System (ADS)
Tanemura, M.; Chida, Y.
2016-09-01
Many control system design problems are expressed as the minimization of a performance index under BMI conditions. A minimization problem expressed as LMIs, by contrast, can be solved easily because of the convexity of LMIs. Therefore, many researchers have studied transforming a variety of control design problems into convex minimization problems expressed as LMIs. This paper proposes an LMI method for a quadratic performance index minimization problem with a class of BMI conditions. The class of problems treated here includes the design of state-feedback gains for switched systems, among others. The effectiveness of the proposed method is verified through a state-feedback gain design for switched systems and a numerical simulation using the designed feedback gains.
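The feasibility check behind such stability LMIs can be illustrated with a minimal sketch: for a Hurwitz state matrix A, the LMI AᵀP + PA ≺ 0, P ≻ 0 has a certificate obtained from the Lyapunov equation. This only illustrates the LMI/Lyapunov connection, not the paper's method; the matrix A below is a hypothetical example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable state matrix (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# The LMI  A^T P + P A < 0,  P > 0  is feasible iff A is Hurwitz.
# One certificate: solve the Lyapunov equation A^T P + P A = -Q with Q > 0
# and check that the resulting P is positive definite.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)   # solves (A^T) P + P (A^T)^T = -Q

assert np.all(np.linalg.eigvalsh(P) > 0)   # P > 0: stability certified
residual = A.T @ P + P @ A + Q             # should vanish
```

Solvers for genuine BMI problems must instead iterate, e.g. by fixing one matrix variable at a time so each subproblem becomes an LMI.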
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
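The local-linearization idea for the quaternion rate equations can be sketched as follows: holding the body rate ω constant over a step, the update is the exact matrix exponential of the 4x4 rate matrix, which has a closed form because the matrix squares to -|ω|²I. The rate and step size below are illustrative, not from the report.

```python
import numpy as np

def omega_matrix(w):
    """4x4 skew matrix for quaternion kinematics q_dot = 0.5 * Omega(w) @ q."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def step_local_linearization(q, w, h):
    """One step assuming w is constant over [t, t+h]:
    q_new = expm(0.5*h*Omega) @ q, using the closed form
    expm = cos(a)*I + (sin(a)/|w|)*Omega with a = |w|*h/2."""
    n = np.linalg.norm(w)
    if n < 1e-12:
        return q
    a = 0.5 * n * h
    M = np.cos(a) * np.eye(4) + (np.sin(a) / n) * omega_matrix(w)
    return M @ q

def step_euler(q, w, h):
    """Classical explicit Euler step (drifts off the unit sphere)."""
    return q + 0.5 * h * omega_matrix(w) @ q

q_exact = np.array([1.0, 0.0, 0.0, 0.0])
q_euler = q_exact.copy()
w = np.array([0.0, 0.0, 2.0])   # hypothetical constant body rate, rad/s
h = 0.01
for _ in range(1000):
    q_exact = step_local_linearization(q_exact, w, h)
    q_euler = step_euler(q_euler, w, h)

# The linearized (matrix-exponential) step preserves the unit norm and
# tracks the true rotation; the Euler step's norm grows each step.
```

After 10 s of rotation about z at 2 rad/s, the exact step gives q = [cos 10, 0, 0, sin 10] while the Euler quaternion has visibly drifted off the unit sphere.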
NASA Astrophysics Data System (ADS)
Hadrava, M.; Feistauer, M.; Horáček, J.; Kosík, A.
2013-10-01
The paper is concerned with the numerical solution of static and dynamic elasticity problems: the computation of the so-called ALE mapping (representing the mesh deformation) in the solution of flow in time-dependent domains, and the computation of the time-dependent deformation of an elastic body. These two problems are important ingredients in fluid-structure interaction (FSI). They are discretized by the discontinuous Galerkin method (DGM). We describe the method and present some test problems. The developed method is applied to the FSI problem treated in [2].
Indivisibility, Complementarity and Ontology: A Bohrian Interpretation of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Roldán-Charria, Jairo
2014-12-01
The interpretation of quantum mechanics presented in this paper is inspired by two ideas that are fundamental in Bohr's writings: indivisibility and complementarity. Further basic assumptions of the proposed interpretation are completeness, universality and conceptual economy. In the interpretation, decoherence plays a fundamental role for the understanding of measurement. A general and precise conception of complementarity is proposed. It is fundamental in this interpretation to make a distinction between ontological reality, constituted by everything that does not depend at all on the collectivity of human beings, their decisions, limitations, or existence, and empirical reality, constituted by everything that, while not ontological, is nevertheless intersubjective. According to the proposed interpretation, neither the dynamical properties, nor the constitutive properties of microsystems like mass, charge and spin, are ontological. The properties of macroscopic systems and space-time are also considered to belong to empirical reality. The acceptance of the above mentioned conclusion does not imply a total rejection of the notion of ontological reality. In the paper, utilizing the Aristotelian ideas of general cause and potentiality, a relation between ontological reality and empirical reality is proposed. Some glimpses of ontological reality, in the form of what can be said about it, are finally presented.
Autoantigen complementarity and its contributions to hallmarks of autoimmune disease.
Pendergraft, William F; Badhwar, Anshul K; Preston, Gloria A
2015-06-21
The question considered is, "What causes the autoimmune response to begin, and what causes it to worsen into autoimmune disease?" The theory of autoantigen complementarity posits that the initiating immunogen causing disease is a protein complementary (antisense) to the self-antigen, rather than the native protein itself. The resulting primary antibody elicits an anti-antibody response, or anti-idiotype, consequently producing a disease-inciting autoantibody. Yet, not everyone who develops self-reactive autoantibodies will manifest autoimmune disease. What is apparent is that manifestation of disease is governed by the acquisition of multiple immune-compromising traits that increase susceptibility and drive disease. Taking into account current cellular, molecular, and genetic information, six traits, or 'hallmarks', of autoimmune disease were proposed: (1) autoreactive cells evade deletion, (2) presence of asymptomatic autoantibodies, (3) hyperactivity of the Fc-FcR pathway, (4) susceptibility to environmental impact, (5) antigenic modifications of self-proteins, and (6) microbial infections. Presented here is a discussion of how components delineated in the theory of autoantigen complementarity potentially promote the acquisition of multiple 'hallmarks' of disease.
ERIC Educational Resources Information Center
Mills, James W.; And Others
1973-01-01
The study reported here tested an application of the Linear Programming Model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)
NASA Astrophysics Data System (ADS)
Lozhnikov, D. A.
2012-03-01
In a series of papers, S. Yu. Dobrokhotov, B. Tirozzi, S. Ya. Sekerzh-Zenkovich, A. I. Shafarevich, and their co-authors suggested new effective asymptotic formulas for solving a Cauchy problem with localized initial data for multidimensional linear hyperbolic equations with variable coefficients and, in particular, for a linearized system of shallow-water equations over an uneven bottom. The solutions are localized in a neighborhood of fronts on which focal points and self-intersection points (singular points) occur in the course of time, owing to the variability of the coefficients. In the present paper, a numerical realization of the asymptotic formulas in a neighborhood of singular points of fronts is presented for the system of shallow-water equations, problems of gluing these formulas together with formulas for regular domains are discussed, and a comparison of asymptotic solutions with solutions obtained by direct numerical computation is carried out.
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
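A minimal worked example of the kind of linear program discussed above, using SciPy's `linprog`; the problem data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical problem: maximize 3x + 5y subject to
#   x + 2y <= 14,  3x - y >= 0,  x - y <= 2,  x, y >= 0.
# linprog minimizes, so negate the objective.
res = linprog(c=[-3, -5],
              A_ub=[[1, 2],     #  x + 2y <= 14
                    [-3, 1],    # -3x +  y <= 0
                    [1, -1]],   #  x -  y <= 2
              b_ub=[14, 0, 2],
              bounds=[(0, None), (0, None)],
              method="highs")

x, y = res.x    # optimal vertex (6, 4), objective value 38
# With the "highs" backend (SciPy >= 1.7), dual values for the kind of
# reduced-cost and range analysis mentioned above are available in
# res.ineqlin.marginals.
```

The optimum sits at the vertex where the first and third constraints are active, which is exactly the situation the dual-problem and reduced-cost machinery analyzes.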
ERIC Educational Resources Information Center
Laird, Heather; Vande Kemp, Hendrika
1987-01-01
Explored the level of family therapist complementarity in the early, middle and late stages of therapy by performing a micro-analysis of Salvador Minuchin with one family in successful therapy. Level of therapist complementarity was significantly greater in the early and late stages than in the middle stage, and was significantly correlated with…
Interpersonal Complementarity in the Mental Health Intake: A Mixed-Methods Study
ERIC Educational Resources Information Center
Rosen, Daniel C.; Miller, Alisa B.; Nakash, Ora; Halperin, Lucila; Alegria, Margarita
2012-01-01
The study examined which socio-demographic differences between clients and providers influenced interpersonal complementarity during an initial intake session; that is, behaviors that facilitate harmonious interactions between client and provider. Complementarity was assessed using blinded ratings of 114 videotaped intake sessions by trained…
ERIC Educational Resources Information Center
Rothe, J. Peter
This article focuses on the linkage between the quantitative and qualitative distance education research methods. The concept that serves as the conceptual link is termed "complementarity." The definition of complementarity emerges through a simulated study of FernUniversitat's mentors. The study shows that in the case of the mentors,…
NASA Astrophysics Data System (ADS)
Goloviznin, V. M.; Karabasov, S. A.; Kozubskaya, T. K.; Maksimov, N. V.
2009-12-01
A generalization of the CABARET finite difference scheme is proposed for linearized one-dimensional Euler equations based on the characteristic decomposition into local Riemann invariants. The new method is compared with several central finite difference schemes that are widely used in computational aeroacoustics. Numerical results for the propagation of an acoustic wave in a homogeneous field and the refraction of this wave through a contact discontinuity obtained on a strongly nonuniform grid are presented.
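The characteristic decomposition into local Riemann invariants can be sketched for the linearized 1D Euler (acoustics) system: the invariants w± = p ± ρ₀c·u advect independently at speeds ±c. The sketch below uses plain first-order upwinding, not the CABARET scheme itself, and all parameters are illustrative; at CFL = 1 on a periodic grid the upwind update is an exact shift.

```python
import numpy as np

# Linearized 1D Euler (acoustics) with density rho0 and sound speed c:
#   u_t + (1/rho0) p_x = 0,  p_t + rho0 c^2 u_x = 0.
# Riemann invariants: w_plus = p + rho0*c*u (speed +c),
#                     w_minus = p - rho0*c*u (speed -c).
rho0, c = 1.2, 340.0
n = 200
dx = 1.0 / n
dt = dx / c                      # CFL = 1: first-order upwind is exact
x = np.arange(n) * dx

p = np.exp(-200.0 * (x - 0.5) ** 2)   # Gaussian pressure pulse
u = np.zeros(n)

wp = p + rho0 * c * u
wm = p - rho0 * c * u

steps = 50
lam = c * dt / dx
for _ in range(steps):
    wp = wp - lam * (wp - np.roll(wp, 1))    # right-going, backward difference
    wm = wm + lam * (np.roll(wm, -1) - wm)   # left-going, forward difference

p_new = 0.5 * (wp + wm)
u_new = 0.5 * (wp - wm) / (rho0 * c)
```

The initial pulse splits into two counter-propagating acoustic waves of half amplitude, recovered by transforming the shifted invariants back to (p, u).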
NASA Astrophysics Data System (ADS)
Turkin, Alexander; van Oijen, Antoine M.; Turkin, Anatoliy A.
2015-11-01
One-dimensional sliding along DNA as a means to accelerate protein target search is a well-known phenomenon occurring in various biological systems. Using a biomimetic approach, we have recently demonstrated the practical use of DNA-sliding peptides to speed up bimolecular reactions by more than an order of magnitude by allowing the reactants to associate not only in the solution by three-dimensional (3D) diffusion, but also on DNA via one-dimensional (1D) diffusion [A. Turkin et al., Chem. Sci. (2015), 10.1039/C5SC03063C]. Here we present a mean-field kinetic model of a bimolecular reaction in a solution with linear extended sinks (e.g., DNA) that can intermittently trap molecules present in the solution. The model consists of chemical rate equations for the mean concentrations of the reacting species. Our model demonstrates that the addition of linear traps to the solution can significantly accelerate reactant association. We show that at optimum concentrations of linear traps the 1D reaction pathway dominates the kinetics of the bimolecular reaction; i.e., these 1D traps function as an assembly line for the reaction product. Moreover, we show that the association reaction on linear sinks between trapped reactants exhibits a nonclassical third-order behavior. Predictions of the model agree well with our experimental observations. Our model provides a general description of bimolecular reactions that are controlled by a combined 3D+1D mechanism and can be used to quantitatively describe both naturally occurring and biomimetic biochemical systems that reduce the dimensionality of search.
Complementarity of information and the emergence of the classical world
NASA Astrophysics Data System (ADS)
Zwolak, Michael; Zurek, Wojciech
2013-03-01
We prove an anti-symmetry property relating accessible information about a system through some auxiliary system F and the quantum discord with respect to a complementary system F'. In Quantum Darwinism, where fragments of the environment relay information to observers, this relation allows us to understand some fundamental properties of the correlations between a quantum system and its environment. First, it relies on a natural separation of accessible information and quantum information about a system. Under decoherence, this separation shows that accessible information is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. Second, it shows that objective information becomes accessible to many observers only when quantum information is relegated to correlations with the global environment, and is therefore locally inaccessible. The resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality, and supports Bohr's intuition that quantum phenomena acquire classical reality only when communicated.
Complementarity of quantum discord and classically accessible information
Zwolak, Michael; Zurek, Wojciech H.
2013-01-01
The sum of the Holevo quantity (that bounds the capacity of quantum channels to transmit classical information about an observable) and the quantum discord (a measure of the quantumness of correlations of that observable) yields an observable-independent total given by the quantum mutual information. This split naturally delineates information about quantum systems accessible to observers – information that is redundantly transmitted by the environment – while showing that it is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. We also prove an anti-symmetry property relating accessible information and discord. It shows that information becomes objective – accessible to many observers – only as quantum information is relegated to correlations with the global environment, and, therefore, locally inaccessible. The resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality while flagrantly quantum superpositions are out of reach.
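The split of the mutual information into accessible information plus discord can be checked numerically in the simplest case, a Bell pair, with the "pointer" measurement taken as the computational basis. This is a toy illustration of the identity, not the general proof.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# System S and fragment F in the Bell state (|00> + |11>)/sqrt(2).
psi = np.zeros(4)
psi[0] = psi[3] = 1.0 / np.sqrt(2)
rho_sf = np.outer(psi, psi)

# Reduced states via partial trace; indices of r are [s, f, s', f'].
r = rho_sf.reshape(2, 2, 2, 2)
rho_s = np.trace(r, axis1=1, axis2=3)   # trace out F
rho_f = np.trace(r, axis1=0, axis2=2)   # trace out S

mutual_info = vn_entropy(rho_s) + vn_entropy(rho_f) - vn_entropy(rho_sf)

# Accessible (Holevo) information about S from measuring F in the
# pointer (computational) basis: chi = S(rho_s) - sum_i p_i S(rho_s | i).
chi = vn_entropy(rho_s)
for i in range(2):
    block = r[:, i, :, i]            # unnormalized state of S given outcome i
    pi = np.trace(block).real
    chi -= pi * vn_entropy(block / pi)

discord = mutual_info - chi
# For the Bell state: mutual_info = 2 bits, chi = 1 bit, discord = 1 bit,
# illustrating I = (accessible information) + (discord).
```

One bit of the two bits of mutual information is classically accessible through the pointer basis; the remaining bit is discord, locked in correlations no single measurement on F can read out.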
Bayesian Inference for Duplication–Mutation with Complementarity Network Models
Persing, Adam; Beskos, Alexandros; Heine, Kari; De Iorio, Maria
2015-01-01
Abstract We observe an undirected graph G without multiple edges and self-loops, which is to represent a protein–protein interaction (PPI) network. We assume that G evolved under the duplication–mutation with complementarity (DMC) model from a seed graph, G0, and we also observe the binary forest Γ that represents the duplication history of G. A posterior density for the DMC model parameters is established, and we outline a sampling strategy by which one can perform Bayesian inference; that sampling strategy employs a particle marginal Metropolis–Hastings (PMMH) algorithm. We test our methodology on numerical examples to demonstrate a high accuracy and precision in the inference of the DMC model's mutation and homodimerization parameters. PMID:26355682
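The sampler family used above can be illustrated with a far simpler relative: a plain random-walk Metropolis-Hastings chain targeting a one-parameter posterior. The Beta-Bernoulli toy below is a stand-in, not the paper's PMMH algorithm over the DMC model; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a retention/mutation probability theta, observed
# through n Bernoulli trials with k successes.
n, k = 50, 34

def log_post(theta):
    """Log posterior under a uniform prior (up to an additive constant)."""
    if not 0.0 < theta < 1.0:
        return -np.inf
    return k * np.log(theta) + (n - k) * np.log(1.0 - theta)

theta = 0.5
samples = []
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal()   # symmetric random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                             # accept
    samples.append(theta)

post_mean = np.mean(samples[2000:])   # discard burn-in
# Analytic posterior is Beta(k+1, n-k+1) with mean (k+1)/(n+2) ~ 0.673.
```

PMMH replaces the analytically known likelihood here with an unbiased particle-filter estimate, but the accept/reject skeleton is the same.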
Interference and complementarity for two-photon hybrid entangled states
Nogueira, W. A. T.; Santibanez, M.; Delgado, A.; Saavedra, C.; Neves, L.; Lima, G.; Padua, S.
2010-10-15
In this work we generate two-photon hybrid entangled states (HESs), where the polarization of one photon is entangled with the transverse spatial degree of freedom of the second photon. The photon pair is created by parametric down-conversion in a polarization-entangled state. A birefringent double-slit couples the polarization and spatial degrees of freedom of these photons, and finally, suitable spatial and polarization projections generate the HES. We investigate some interesting aspects of the two-photon hybrid interference and present this study in the context of the complementarity relation that exists between the visibility of the one-photon and that of the two-photon interference patterns.
Complementarity of Neutrinoless Double Beta Decay and Cosmology
Dodelson, Scott; Lykken, Joseph
2014-03-20
Neutrinoless double beta decay experiments constrain one combination of neutrino parameters, while cosmic surveys constrain another. This complementarity opens up an exciting range of possibilities. If neutrinos are Majorana particles, and the neutrino masses follow an inverted hierarchy, then the upcoming sets of both experiments will detect signals. The combined constraints will pin down not only the neutrino masses but also constrain one of the Majorana phases. If the hierarchy is normal, then a beta decay detection with the upcoming generation of experiments is unlikely, but cosmic surveys could constrain the sum of the masses to be relatively heavy, thereby producing a lower bound for the neutrinoless double beta decay rate, and therefore an argument for a next generation beta decay experiment. In this case as well, a combination of the phases will be constrained.
Complementarity of quantum discord and classically accessible information
Zwolak, Michael P.; Zurek, Wojciech H.
2013-05-20
The sum of the Holevo quantity (that bounds the capacity of quantum channels to transmit classical information about an observable) and the quantum discord (a measure of the quantumness of correlations of that observable) yields an observable-independent total given by the quantum mutual information. This split naturally delineates information about quantum systems accessible to observers – information that is redundantly transmitted by the environment – while showing that it is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. In addition, we prove an anti-symmetry property relating accessible information and discord. It shows that information becomes objective – accessible to many observers – only as quantum information is relegated to correlations with the global environment, and, therefore, locally inaccessible. Lastly, the resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality while flagrantly quantum superpositions are out of reach.
Complementarity in Spontaneous Emission: Quantum Jumps, Staggers and Slides
NASA Astrophysics Data System (ADS)
Wiseman, H.
Dan Walls is rightly famous for his part in many of the outstanding developments in quantum optics in the last 30 years. Two of these are most relevant to this paper. The first is the prediction of nonclassical properties of the fluorescence of a two-level atom, such as antibunching [1] and squeezing [2]. Both of these predictions have now been verified experimentally [3,4]. The second is the investigation of fundamental issues such as complementarity and the uncertainty principle [5,6]. This latter area is one which has generated a lively theoretical discussion [7], and, more importantly, suggested new experiments [8]. It was also an area in which I had the honour of working with Dan [9], and of gaining the benefit of his instinct for picking a fruitful line of investigation.
Phenomenology and the life sciences: Clarifications and complementarities.
Sheets-Johnstone, Maxine
2015-12-01
This paper first clarifies phenomenology in ways essential to demonstrating its basic concern with Nature and its recognition of individual and cultural differences as well as commonalities. It furthermore clarifies phenomenological methodology in ways essential to understanding the methodology itself, its purpose, and its consequences. These clarifications show how phenomenology, by hewing to the dynamic realities of life itself and experiences of life itself, counters reductive thinking and "embodiments" of one kind and another. On the basis of these clarifications, the paper then turns to detailing conceptual complementarities between phenomenology and the life sciences, particularly highlighting studies in coordination dynamics. In doing so, it brings to light fundamental relationships such as those between mind and motion and between intrinsic dynamics and primal animation. It furthermore highlights the common concern with origins in both phenomenology and evolutionary biology: the history of how what is present is related to its inception in the past and to its transformations from past to present.
Reinforcement learning in complementarity game and population dynamics
NASA Astrophysics Data System (ADS)
Jost, Jürgen; Li, Wei
2014-02-01
We systematically test and compare different reinforcement learning schemes in a complementarity game [J. Jost and W. Li, Physica A 345, 245 (2005), 10.1016/j.physa.2004.07.005] played between members of two populations. More precisely, we study the Roth-Erev, Bush-Mosteller, and SoftMax reinforcement learning schemes. A modified version of Roth-Erev with a power exponent of 1.5, as opposed to 1 in the standard version, performs best. We also compare these reinforcement learning strategies with evolutionary schemes. This gives insight into aspects like the issue of quick adaptation as opposed to systematic exploration or the role of learning rates.
Complementarity of genotoxic and nongenotoxic predictors of rodent carcinogenicity.
Kitchin, K T; Brown, J L; Kulkarni, A P
1994-01-01
Twenty-one chemicals carcinogenic in rodent bioassays were selected for study. The chemicals were administered by gavage in two dose levels to female Sprague-Dawley rats. The effects of these 21 chemicals on four biochemical assays [hepatic DNA damage by alkaline elution (DD), hepatic ornithine decarboxylase activity (ODC), serum alanine aminotransferase activity (ALT), and hepatic cytochrome P-450 content (P450)] were determined. Available data from seven cancer predictors published by others [the Ames test (AMES), mutation in Salmonella typhimurium TA 1537 (TA 1537), structural alerts (SA), mutation in mouse lymphoma cells (MOLY), chromosomal aberrations in Chinese hamster ovary cells (ABS), sister chromatid exchange in hamster ovary cells (SCE), and the ke test (ke)] were also compiled for these 21 chemical carcinogens plus 28 carcinogens and 62 noncarcinogens already published by our laboratory. From the resulting 111 (chemicals) by 11 (individual cancer predictors) data matrix, the five operational characteristics (sensitivity, specificity, positive predictivity, negative predictivity, and concordance) of each of the 11 individual cancer predictors (four biochemical parameters of this study and seven cancer predictors of others) are presented. Two examples of complementarity or synergy of composite cancer predictors were found. To obtain maximum concordance it was necessary to combine both genotoxic and nongenotoxic cancer predictors. The composite cancer predictor (DD or [ODC and P450] or [ODC and ALT]) had higher concordance than did any of the four individual cancer predictors from which it was constructed. Similarly, the composite cancer predictor (TA 1537 or DD or [ODC and P450] or [ODC and ALT]) had higher concordance than any of its five individual constituent cancer predictors. Complementarity or synergy has been demonstrated both 1) among genotoxic cancer predictors (DD and TA 1537) and 2) between nongenotoxic (ODC, P450, and ALT) and genotoxic cancer
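The operational characteristics and the gain from a composite predictor can be sketched on invented call data; the boolean arrays below are illustrative, not the paper's 111-chemical matrix.

```python
import numpy as np

def operating_characteristics(pred, truth):
    """Sensitivity, specificity, and concordance of a binary predictor."""
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    sens = tp / np.sum(truth)
    spec = tn / np.sum(~truth)
    conc = (tp + tn) / len(truth)
    return sens, spec, conc

# Hypothetical calls for 10 chemicals (True = carcinogen / positive call).
truth = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
dd    = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], dtype=bool)  # genotoxic
odc   = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 0], dtype=bool)  # nongenotoxic
p450  = np.array([0, 0, 1, 1, 1, 1, 0, 0, 1, 0], dtype=bool)  # nongenotoxic

# Composite predictor in the style of (DD or [ODC and P450]).
composite = dd | (odc & p450)

conc = {name: operating_characteristics(pred, truth)[2]
        for name, pred in [("DD", dd), ("ODC", odc),
                           ("P450", p450), ("composite", composite)]}
```

In this contrived example the genotoxic predictor catches one subset of carcinogens and the paired nongenotoxic predictors another, so the boolean composite reaches a concordance none of its constituents attains alone, which is the complementarity effect described above.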
NASA Astrophysics Data System (ADS)
Tanaka, Hidefumi; Yamamoto, Yuhji
2016-05-01
Palaeointensity experiments were carried out on a sample collection from two sections of basalt lava flow sequences of Pliocene age in north central Iceland (Chron C2An) to further refine knowledge of the behaviour of the palaeomagnetic field. Selection of samples was mainly based on their stability of remanence to thermal demagnetization as well as good reversibility in variations of magnetic susceptibility and saturation magnetization with temperature, which would indicate the presence of magnetite as a product of deuteric oxidation of titanomagnetite. Among 167 lava flows from the two sections, 44 flows were selected for the Königsberger-Thellier-Thellier experiment in vacuum. In spite of careful pre-selection of samples, an Arai plot with two linear segments, or a concave-up appearance, was often encountered during the experiments. This non-ideal behaviour was probably caused by an irreversible change in the domain state of the magnetic grains of the pseudo-single-domain (PSD) range. This is assumed because an ideal linear plot was obtained in the second run of the palaeointensity experiment, in which a laboratory thermoremanence acquired after the final step of the first run was used as a natural remanence. This experiment was conducted on six selected samples, and no clear difference between the magnetic grains of the experimental samples and their pristine sister samples was found by scanning electron microscopy and hysteresis measurements, that is, no notable chemical/mineralogical alteration occurred, suggesting that no change in the grain size distribution had taken place. Hence, the two-segment Arai plot was not caused by the reversible multidomain/PSD effect in which the curvature of the Arai plot is dependent on the grain size. Considering that the irreversible change in domain state must have affected data points at not only high temperatures but also low temperatures, fv ≥ 0.5 was adopted as one of the acceptance criteria where fv is a vectorially defined
NASA Astrophysics Data System (ADS)
Cai, Mingchao; Pavarino, Luca F.
2016-10-01
The goal of this work is to construct and study hybrid and multiplicative two-level overlapping Schwarz algorithms with standard coarse spaces for the almost incompressible linear elasticity and Stokes systems, discretized by mixed finite and spectral element methods with discontinuous pressures. Two different approaches are considered to solve the resulting saddle point systems: a) a preconditioned conjugate gradient (PCG) method applied to the symmetric positive definite reformulation of the almost incompressible linear elasticity system obtained by eliminating the pressure unknowns; b) a GMRES method with indefinite overlapping Schwarz preconditioner applied directly to the saddle point formulation of both the elasticity and Stokes systems. Condition number estimates and convergence properties of the proposed hybrid and multiplicative overlapping Schwarz algorithms are proven for the positive definite reformulation of almost incompressible elasticity. These results are based on our previous study [8] where only additive Schwarz preconditioners were considered for almost incompressible elasticity. Extensive numerical experiments with both finite and spectral elements show that the proposed overlapping Schwarz preconditioners are scalable, quasi-optimal in the number of unknowns across individual subdomains and robust with respect to discontinuities of the material parameters across subdomains interfaces. The results indicate that the proposed preconditioners retain a good performance also when the quasi-monotonicity assumption, required by the available theory, does not hold.
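A minimal sketch of a two-level overlapping Schwarz preconditioner, reduced to the additive variant on a 1D Poisson model problem; the paper studies hybrid and multiplicative variants on elasticity and Stokes systems, so everything below is a simplified stand-in with invented sizes.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1D Poisson model problem -u'' = 1 on (0,1), homogeneous Dirichlet BCs,
# standing in for the saddle point systems discussed above.
n = 64
h = 1.0 / (n + 1)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
b = np.ones(n)
xs = (np.arange(n) + 1) * h

# Two overlapping subdomains with exact local solves.
overlap = 4
doms = [np.arange(0, n // 2 + overlap), np.arange(n // 2 - overlap, n)]
local = [(d, spla.factorized(A[np.ix_(d, d)].tocsc())) for d in doms]

# Standard coarse space: a few piecewise-linear hat functions.
nc = 7
xc = np.linspace(0.0, 1.0, nc + 2)[1:-1]
R0 = np.maximum(0.0, 1.0 - np.abs(xs[None, :] - xc[:, None]) / (xc[1] - xc[0]))
A0 = R0 @ (A @ R0.T)

def schwarz(r):
    """Two-level additive Schwarz: coarse solve plus local subdomain solves."""
    z = R0.T @ np.linalg.solve(A0, R0 @ r)
    for d, solve in local:
        z[d] += solve(r[d])
    return z

M = spla.LinearOperator((n, n), matvec=schwarz)
u, info = spla.cg(A, b, M=M)
```

The hybrid and multiplicative variants of the paper differ only in how the coarse and local corrections are combined (sequentially rather than summed), which typically improves the constants in the condition number bound.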
NASA Technical Reports Server (NTRS)
Wong, P. K.
1975-01-01
The closely-related problems of designing reliable feedback stabilization strategy and coordinating decentralized feedbacks are considered. Two approaches are taken. A geometric characterization of the structure of control interaction (and its dual) was first attempted and a concept of structural homomorphism developed based on the idea of 'similarity' of interaction pattern. The idea of finding classes of individual feedback maps that do not 'interfere' with the stabilizing action of each other was developed by identifying the structural properties of nondestabilizing and LQ-optimal feedback maps. Some known stability properties of LQ-feedback were generalized and some partial solutions were provided to the reliable stabilization and decentralized feedback coordination problems. A concept of coordination parametrization was introduced, and a scheme for classifying different modes of decentralization (information, control law computation, on-line control implementation) in control systems was developed.
1980-05-31
"Multiconstraint Zero-One Knapsack Problem," The Journal of the Operational Research Society, Vol. 30, 1979, pp. 369-378. [41] Kepler, C. … programming. Shih [40] has written on a branch and bound method; Kepler and Blackman [41] have demonstrated the use of dynamic programming in the selection of … "Portfolio Selection Model," IEEE Transactions on Engineering Management, Vol. EM-26, No. 1, 1979, pp. 2-7. [40] Shih, Wei, "A Branch and
NASA Astrophysics Data System (ADS)
Updike, Clark A.; Greeley, Scott W.; King, James A.
1998-10-01
In the process of designing a control actuator for a vibration cancellation system demonstration on a large, precision optical testbed, it was discovered that the support struts to which the control actuators attach could not be disassembled. This led to the development of a Linear Precision ACTuator (LPACT) with a novel two-piece design which could be clamped around the strut in situ. The design requirements, LPACT characteristics, and LPACT test results are fully described and contrasted with earlier LPACT designs. Cancellation system performance results are presented for a 3-tone disturbance case. Excellent results, on the order of 40 dB of attenuation per tone (down to the noise floor on two disturbances), are achieved using an Adaptive Neural Controller (ANC).
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
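The conventional regularized formulation that the abstract contrasts with the Bayesian treatment can be sketched as a ridge-regularized least-squares inversion. The SRS matrix, true source term, noise level, and regularization weight below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy source-term inversion: y = M x + noise, with M a hypothetical
# source-receptor sensitivity (SRS) matrix (m receptors, n release times).
m, n = 40, 10
M = rng.random((m, n))
x_true = np.array([0.0, 0.0, 2.0, 5.0, 3.0, 0.5, 0.0, 0.0, 0.0, 0.0])
y = M @ x_true + 0.01 * rng.standard_normal(m)

# Conventional regularized least squares with a hand-tuned weight alpha,
#   x_hat = argmin ||y - M x||^2 + alpha ||x||^2.
# alpha is exactly the kind of tuning parameter the Bayesian treatment
# above estimates from the data instead.
alpha = 1e-3
x_hat = np.linalg.solve(M.T @ M + alpha * np.eye(n), M.T @ y)
```

In the variational Bayes version, alpha (and the observation noise variance) become random variables with their own posteriors, updated iteratively together with x.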
NASA Astrophysics Data System (ADS)
Kahnert, Michael
2006-09-01
Explicit symmetry relations for the Green's function subject to homogeneous boundary conditions are derived for arbitrary linear differential or integral equation problems in which the boundary surface has a set of symmetry elements. For corresponding homogeneous problems subject to inhomogeneous boundary conditions implicit symmetry relations involving the Green's function are obtained. The usefulness of these symmetry relations is illustrated by means of a recently developed self-consistent Green's function formalism of electromagnetic and acoustic scattering problems applied to the exterior scattering problem. One obtains explicit symmetry relations for the volume Green's function, the surface Green's function, and the interaction operator, and the respective symmetry relations are shown to be equivalent. This allows us to treat boundary symmetries of volume-integral equation methods, boundary-integral equation methods, and the T matrix formulation of acoustic and electromagnetic scattering under a common theoretical framework. By specifying a specific expansion basis the coordinate-free symmetry relations of, e.g., the surface Green's function can be brought into the form of explicit symmetry relations of its expansion coefficient matrix. For the specific choice of radiating spherical wave functions the approach is illustrated by deriving unitary reducible representations of non-cubic finite point groups in this basis, and by deriving the corresponding explicit symmetry relations of the coefficient matrix. The reducible representations can be reduced by group-theoretical techniques, thus bringing the coefficient matrix into block-diagonal form, which can greatly reduce ill-conditioning problems in numerical applications.
NASA Technical Reports Server (NTRS)
Bensoussan, A.; Delfour, M. C.; Mitter, S. K.
1976-01-01
Available published results are surveyed for a special class of infinite-dimensional control systems whose evolution is characterized by a semigroup of operators of class C subscript zero. Emphasis is placed on an approach that clarifies the system-theoretic relationship among controllability, stabilizability, stability, and the existence of a solution to an associated operator equation of the Riccati type. Formulation of the optimal control problem is reviewed along with the asymptotic behavior of solutions to a general system of equations and several theorems concerning L2 stability. Examples are briefly discussed which involve second-order parabolic systems, first-order hyperbolic systems, and distributed boundary control.
Granja, C; Almada-Lobo, B; Janela, F; Seabra, J; Mendes, A
2014-12-01
As patients' length of stay on waiting lists increases, governments are looking for strategies to control the problem. Agreements were created with private providers to diminish the workload in the public sector. However, the growth of the private sector is not keeping pace with the demand for care. Given this context, new management strategies have to be considered in order to minimize patients' length of stay on waiting lists while reducing costs and increasing (or at least maintaining) the quality of care. Appointment scheduling systems are known to be proficient in the optimization of health care services. Their use focuses on increasing the utilization of human resources and medical equipment and on reducing patient waiting times. In this paper, a simulation-based optimization approach to the Patient Admission Scheduling Problem is presented. Modeling tools and simulation techniques are used in the optimization of a diagnostic imaging department. The proposed techniques have been demonstrated to be effective in the evaluation of diagnostic imaging workflows. A simulated annealing algorithm was used to optimize the patient admission sequence towards minimizing the patients' total completion time and total waiting time. The obtained results showed average reductions of 5% in total completion time and 38% in total waiting time. Copyright © 2014 Elsevier Inc. All rights reserved.
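As a rough illustration of the optimization step described above, the following toy simulated annealing loop reorders a fictitious admission list to reduce total completion time. The paper evaluates candidate sequences through a discrete-event simulation of the imaging department, which is not reproduced here; all names, durations, and cooling parameters below are made up:

```python
import math
import random

def total_completion(seq, durations):
    """Sum of completion times of patients served in the given order."""
    t, total = 0.0, 0.0
    for p in seq:
        t += durations[p]
        total += t
    return total

def anneal(durations, steps=5000, t0=10.0, seed=1):
    """Toy simulated annealing over admission orders (swap neighborhood)."""
    rng = random.Random(seed)
    seq = list(range(len(durations)))
    best = cur = total_completion(seq, durations)
    best_seq = seq[:]
    for k in range(steps):
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]            # swap two patients
        cand = total_completion(seq, durations)
        temp = t0 * (1.0 - k / steps) + 1e-9       # linear cooling
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cand < best:
                best, best_seq = cand, seq[:]
        else:
            seq[i], seq[j] = seq[j], seq[i]        # undo the swap
    return best_seq, best

durations = [30, 5, 20, 10, 45, 15]               # exam durations (minutes)
order, cost = anneal(durations)
```

For this single-resource toy objective the known optimum is the shortest-processing-time order; the annealer is only there to show the mechanics.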
Cameron, M.K.; Fomel, S.B.; Sethian, J.A.
2009-01-01
In the present work we derive and study a nonlinear elliptic PDE coming from the problem of estimating the sound speed inside the Earth. The physical setting allows us to pose only a Cauchy problem, which is hence ill-posed. However, we are still able to solve it numerically on a time interval long enough to be of practical use. We used two approaches. The first is a finite-difference time-marching scheme inspired by the Lax-Friedrichs method; its key features are the Lax-Friedrichs averaging and the wide stencil in space. The second is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics (the truncation of the Chebyshev series plays the same role in the spectral approach), and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in the accuracy of the estimated sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
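The Lax-Friedrichs averaging mentioned above is easiest to see on a model problem. The sketch below applies the classical Lax-Friedrichs update to linear advection u_t + a u_x = 0 on a periodic grid; the neighbor average 0.5*(u[j+1] + u[j-1]) is the dissipative ingredient that damps high harmonics. This is a generic textbook illustration, not the authors' scheme for their ill-posed elliptic problem:

```python
import numpy as np

# Classical Lax-Friedrichs time-marching for u_t + a u_x = 0,
# periodic boundary conditions, CFL number 0.4.
a, L, n = 1.0, 1.0, 200
dx = L / n
dt = 0.4 * dx / a
x = np.arange(n) * dx
u = np.exp(-100 * (x - 0.5) ** 2)       # smooth initial bump
mass0 = u.sum() * dx                    # discrete integral of u
for _ in range(100):
    up = np.roll(u, -1)                 # u[j+1] with periodic wrap
    um = np.roll(u, 1)                  # u[j-1]
    # neighbor average (dissipation) + centered flux difference
    u = 0.5 * (up + um) - a * dt / (2 * dx) * (up - um)
mass1 = u.sum() * dx
```

With periodic boundaries the update conserves the discrete mass exactly, and under the CFL condition each new value is a convex combination of its neighbors, so no new extrema appear.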
Parra, Gilbert R; Smith, Gail L; Mason, W Alex; Savolainen, Jukka; Chmelka, Mary B; Miettunen, Jouko; Järvelin, Marjo-Riitta
2017-10-01
This study tested whether there are linear or nonlinear relations between prenatal/birth cumulative risk and psychosocial outcomes during adolescence. Participants (n = 6963) were taken from the Northern Finland Birth Cohort Study 1986. The majority of participants did not experience any contextual risk factors around the time of the target child's birth (58.1%). Even in this low-risk sample, cumulative contextual risk assessed around the time of birth was related to seven different psychosocial outcomes 16 years later. There was some evidence for nonlinear effects, but only for substance-related outcomes; however, the form of the association depended on how the cumulative risk index was calculated. Gender did not moderate the relation between cumulative risk and any of the adolescent psychosocial outcomes. Results highlight the potential value of using the cumulative risk framework for identifying children at birth who are at risk for a range of poor psychosocial outcomes during adolescence. Copyright © 2017 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, De-Han; Hofmann, Bernd; Zou, Jun
2017-01-01
We consider the ill-posed operator equation Ax = y with an injective and bounded linear operator A mapping between ℓ² and a Hilbert space Y, possessing the unique solution x† = {x†_k}, k = 1, …, ∞. For the cases where sparsity x† ∈ ℓ⁰ is expected but often slightly violated in practice, we investigate, in comparison with ℓ¹-regularization, the elastic-net regularization, where the penalty is a weighted superposition of the ℓ¹-norm and the square of the ℓ²-norm, under the assumption that x† ∈ ℓ¹. Two positive parameters occur in this approach: the weight parameter η and the regularization parameter that multiplies the whole penalty in the Tikhonov functional, whereas only one regularization parameter arises in ℓ¹-regularization. Based on the variational inequality approach for the description of the solution smoothness with respect to the forward operator A, and exploiting the method of approximate source conditions, we present results estimating the rate of convergence for the elastic-net regularization. The occurring rate function contains the rate of the decay x†_k → 0 for k → ∞ and the classical smoothness properties of x† as an element of ℓ².
Hauck, Cory D; Alldredge, Graham; Tits, Andre
2012-01-01
We present a numerical algorithm to implement entropy-based (M_N) moment models in the context of a simple, linear kinetic equation for particles moving through a material slab. The closure for these models, as for all entropy-based models, is derived through the solution of a constrained, convex optimization problem. The algorithm has two components. The first is a discretization of the moment equations which preserves the set of realizable moments, thereby ensuring that the optimization problem has a solution (in exact arithmetic). The discretization is a second-order kinetic scheme which uses MUSCL-type limiting in space and a strong-stability-preserving Runge-Kutta time integrator. The second component is a Newton-based solver for the dual optimization problem, which uses an adaptive quadrature to evaluate integrals in the dual objective and its derivatives. The accuracy of the numerical solution to the dual problem plays a key role in the time step restriction for the kinetic scheme. We study in detail the difficulties in the dual problem that arise near the boundary of realizable moments, where quadrature formulas are less reliable and the Hessian of the dual objective function is highly ill-conditioned. Extensive numerical experiments are performed to illustrate these difficulties. In cases where the dual problem becomes too difficult to solve numerically, we propose a regularization technique to artificially move moments away from the realizable boundary in a way that still preserves local particle concentrations. We present results of numerical simulations for two challenging test problems in order to quantify the characteristics of the optimization solver and to investigate when and how frequently the regularization is needed.
Complementarity of dark matter searches in the phenomenological MSSM
Cahill-Rowley, Matthew; Cotta, Randy; Drlica-Wagner, Alex; Funk, Stefan; Hewett, JoAnne; Ismail, Ahmed; Rizzo, Tom; Wood, Matthew
2015-03-11
As is well known, the search for and eventual identification of dark matter in supersymmetry requires a simultaneous, multipronged approach with important roles played by the LHC as well as both direct and indirect dark matter detection experiments. We examine the capabilities of these approaches in the 19-parameter phenomenological MSSM which provides a general framework for complementarity studies of neutralino dark matter. We summarize the sensitivity of dark matter searches at the 7 and 8 (and eventually 14) TeV LHC, combined with those by Fermi, CTA, IceCube/DeepCore, COUPP, LZ and XENON. The strengths and weaknesses of each of these techniques are examined and contrasted and their interdependent roles in covering the model parameter space are discussed in detail. We find that these approaches explore orthogonal territory and that advances in each are necessary to cover the supersymmetric weakly interacting massive particle parameter space. We also find that different experiments have widely varying sensitivities to the various dark matter annihilation mechanisms, some of which would be completely excluded by null results from these experiments.
Complementarity in biphoton generation with stimulated or induced coherence
NASA Astrophysics Data System (ADS)
Heuer, A.; Menzel, R.; Milonni, P. W.
2015-09-01
Coherence can be induced or stimulated in parametric down-conversion using two or three crystals when, for example, the idler modes of the crystals are aligned. Previous experiments with induced coherence [Phys. Rev. Lett. 114, 053601 (2015), 10.1103/PhysRevLett.114.053601] focused on which-path information and the role of vacuum fields in realizing complementarity via reduced visibility in single-photon interference. Here we describe experiments comparing induced and stimulated coherence. Different single-photon interference experiments were performed by blocking one of the pump beams in a three-crystal setup. Each counted photon is emitted from one of two crystals, and which-way information may or may not be available, depending on the setup. Distinctly different results are obtained in the induced and stimulated cases, especially when a variable transmission filter is inserted between the crystals. A simplified theoretical model accounts for all the experimental results and is also used to address the question of whether the phases of the signal and idler fields in parametric down-conversion are correlated.
Complementarity of XRFS and LIBS for corrosion studies.
Pérez-Serradilla, J A; Jurado-López, A; Luque de Castro, M D
2007-01-15
A study of ancient coins with different corrosion degrees and the same or different composition has been carried out using energy-dispersive X-ray fluorescence spectrometry (XRFS) and laser-induced breakdown spectroscopy (LIBS). The results show the complementarity of the two techniques: XRFS provides information about the superficial composition, which is used for the assignment of atomic lines in LIBS, and LIBS in turn provides in-depth and tomographic information. Thus, some very superficial impurities such as Ag, Cl, Au, Sr and Sb are only detected by XRFS, while highly corroded coins of an iron-based alloy provided no iron signal by XRFS but, by LIBS, showed an increasing concentration of this element up to a constant composition as the shot number increased. Averaging the same laser-shot number over all sampling positions of a sampling zone produces a significant improvement of the signal-to-noise ratio (SNR), at the expense of the point-by-point information obtained from single-position kinetic series.
Rapid Online Analysis of Local Feature Detectors and Their Complementarity
Ehsan, Shoaib; Clark, Adrian F.; McDonald-Maier, Klaus D.
2013-01-01
A vision system that can assess its own performance and take appropriate actions online to maximize its effectiveness would be a step towards achieving the long-cherished goal of imitating humans. This paper proposes a method for performing an online performance analysis of local feature detectors, the primary stage of many practical vision systems. It advocates the spatial distribution of local image features as a good performance indicator and presents a metric that can be calculated rapidly, concurs with human visual assessments and is complementary to existing offline measures such as repeatability. The metric is shown to provide a measure of complementarity for combinations of detectors, correctly reflecting the underlying principles of individual detectors. Qualitative results on well-established datasets for several state-of-the-art detectors are presented based on the proposed measure. Using a hypothesis testing approach and a newly-acquired, larger image database, statistically-significant performance differences are identified. Different detector pairs and triplets are examined quantitatively and the results provide a useful guideline for combining detectors in applications that require a reasonable spatial distribution of image features. A principled framework for combining feature detectors in these applications is also presented. Timing results reveal the potential of the metric for online applications. PMID:23966187
Complementarity and Area-Efficiency in the Prioritization of the Global Protected Area Network
Kullberg, Peter; Toivonen, Tuuli; Montesino Pouzols, Federico; Lehtomäki, Joona; Di Minin, Enrico; Moilanen, Atte
2015-01-01
Complementarity and cost-efficiency are widely used principles for protected area network design. Despite their wide use and robust theoretical underpinnings, their effects on the performance and patterns of priority areas are rarely studied in detail. Here we compare two approaches for identifying management priority areas inside the global protected area network: 1) a scoring-based approach, used in a recently published analysis, and 2) a spatial prioritization method, which accounts for complementarity and area-efficiency. Using the same IUCN species distribution data, the complementarity method found an equal-area set of priority areas covering double the mean species range compared to the scoring-based approach. The complementarity set also had 72% more species with their full ranges covered, and left only half as many species entirely uncovered compared to the scoring approach. Protected areas in our complementarity-based solution were on average smaller and geographically more scattered. The large difference between the two solutions highlights the need for critical thinking about the selected prioritization method. According to our analysis, accounting for complementarity and area-efficiency can lead to considerable improvements when setting management priorities for the global protected area network. PMID:26678497
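The contrast between the two approaches can be made concrete with a toy presence/absence matrix: scoring ranks sites by species richness, while the complementarity method greedily adds the site contributing the most species not yet covered. The data and budget below are invented for illustration; the paper uses IUCN range maps and a full spatial prioritization, not this greedy sketch:

```python
import numpy as np

# Rows are candidate areas, columns are species (presence/absence).
occ = np.array([
    [1, 1, 1, 1, 0, 0],   # site 0: species-rich, but overlaps site 1
    [1, 1, 1, 0, 0, 0],   # site 1: also rich, largely redundant
    [0, 0, 0, 0, 1, 1],   # site 2: species-poor, but complementary
], dtype=bool)
k = 2                     # budget: number of sites to select

# Scoring: take the k most species-rich sites.
scoring = np.argsort(occ.sum(axis=1))[::-1][:k]

# Complementarity: greedily add the site covering the most NEW species.
chosen, covered = [], np.zeros(occ.shape[1], dtype=bool)
for _ in range(k):
    gains = [(occ[i] & ~covered).sum() for i in range(len(occ))]
    best = int(np.argmax(gains))
    chosen.append(best)
    covered |= occ[best]

cov_scoring = occ[list(scoring)].any(axis=0).sum()   # species covered
cov_greedy = covered.sum()
```

On this example scoring picks the two redundant rich sites (4 species covered), while the complementarity rule covers all 6, mirroring the qualitative result reported in the abstract.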
NASA Astrophysics Data System (ADS)
Giovannacci, D.; Detalle, V.; Martos-Levif, D.; Ogien, J.; Bernikola, E.; Tornari, V.; Hatzigiannakis, K.; Mouhoubi, K.; Bodnar, J.-L.; Walker, G.-C.; Brissaud, D.; Trichereau, B.; Jackson, B.; Bowen, J.
2015-06-01
The abbey church of Chaalis, north of Paris, was founded by Louis VI as a Cistercian monastery on 10th January 1137. In 2013, in the frame of the European Commission's 7th Framework Programme project CHARISMA [grant agreement no. 228330], the chapel was used as a practical case study for application of the work done in a task devoted to best practices in historical buildings and monuments. In the chapel, three areas were identified as relevant. The first area was used for an exercise in diagnosing the different deterioration patterns. The second area was used to analyze a restored area. The third was selected to test some hypotheses on the possibility of using portable instruments to answer questions related to the deterioration problems. To inspect this area, different tools were used: - Visible fluorescence under UV, - THz system, - Stimulated Infra-Red Thermography (SIRT), - Digital Holographic Speckle Pattern Interferometry (DHSPI), - Condition report by conservator-restorer. The complementarity and synergy offered by the profitable use of the different integrated tools are clearly shown in this practical exercise.
A Physicist's Quest in Biology: Max Delbrück and "Complementarity".
Strauss, Bernard S
2017-06-01
Max Delbrück was trained as a physicist but made his major contribution in biology and ultimately shared a Nobel Prize in Physiology or Medicine. He was the acknowledged leader of the founders of molecular biology, yet he failed to achieve his key scientific goals. His ultimate scientific aim was to find evidence for physical laws unique to biology: so-called "complementarity." He never did. The specific problem he initially wanted to solve was the nature of biological replication but the discovery of the mechanism of replication was made by others, in large part because of his disdain for the details of biochemistry. His later career was spent investigating the effect of light on the fungus Phycomyces, a topic that turned out to be of limited general interest. He was known both for his informality but also for his legendary displays of devastating criticism. His life and that of some of his closest colleagues was acted out against a background of a world in conflict. This essay describes the man and his career and searches for an explanation of his profound influence. Copyright © 2017 by the Genetics Society of America.
Accounting for complementarity to maximize monitoring power for species management.
Tulloch, Ayesha I T; Chadès, Iadine; Possingham, Hugh P
2013-10-01
To choose among conservation actions that may benefit many species, managers need to monitor the consequences of those actions. Decisions about which species to monitor from a suite of different species being managed are hindered by natural variability in populations and uncertainty in several factors: the ability of the monitoring to detect a change, the likelihood of the management action being successful for a species, and how representative species are of one another. However, the literature provides little guidance about how to account for these uncertainties when deciding which species to monitor to determine whether the management actions are delivering outcomes. We devised an approach that applies decision science and selects the best complementary suite of species to monitor to meet specific conservation objectives. We created an index for indicator selection that accounts for the likelihood of successfully detecting a real trend due to a management action and whether that signal provides information about other species. We illustrated the benefit of our approach by analyzing a monitoring program for invasive predator management aimed at recovering 14 native Australian mammals of conservation concern. Our method selected the species that provided more monitoring power at lower cost relative to the current strategy and traditional approaches that consider only a subset of the important considerations. Our benefit function accounted for natural variability in species growth rates, uncertainty in the responses of species to the prescribed action, and how well species represent others. Monitoring programs that ignore uncertainty, likelihood of detecting change, and complementarity between species will be more costly and less efficient and may waste funding that could otherwise be used for management. © 2013 Society for Conservation Biology.
Poulton, Terry; Ellaway, Rachel H; Round, Jonathan; Jivram, Trupti; Kavia, Sheetal; Hilton, Sean
2014-11-05
Problem-based learning (PBL) is well established in medical education and beyond, and continues to be developed and explored. Challenges include how to connect the somewhat abstract nature of classroom-based PBL with clinical practice and how to maintain learner engagement in the process of PBL over time. A study was conducted to investigate the efficacy of decision-PBL (D-PBL), a variant form of PBL that replaces linear PBL cases with virtual patients. These Web-based interactive cases provided learners with a series of patient management pathways. Learners were encouraged to consider and discuss courses of action, take their chosen management pathway, and experience the consequences of their decisions. A Web-based application was essential to allow scenarios to respond dynamically to learners' decisions, to deliver the scenarios to multiple PBL classrooms in the same timeframe, and to record centrally the paths taken by the PBL groups. A randomized controlled trial in crossover design was run involving all learners (N=81) in the second year of the graduate entry stream for the undergraduate medicine program at St George's University of London. Learners were randomized to study groups; half engaged in a D-PBL activity whereas the other half had a traditional linear PBL activity on the same subject material. Groups alternated D-PBL and linear PBL over the semester. The measure was mean cohort performance on specific face-to-face exam questions at the end of the semester. D-PBL groups performed better than linear PBL groups on questions related to D-PBL with the difference being statistically significant for all questions. Differences between the exam performances of the 2 groups were not statistically significant for the questions not related to D-PBL. The effect sizes for D-PBL-related questions were large and positive (>0.6) except for 1 question that showed a medium positive effect size. The effect sizes for questions not related to D-PBL were all small (≤0
Bee diversity effects on pollination depend on functional complementarity and niche shifts.
Fründ, Jochen; Dormann, Carsten F; Holzschuh, Andrea; Tscharntke, Teja
2013-09-01
Biodiversity is important for many ecosystem processes. Global declines in pollinator diversity and abundance have been recognized, raising concerns about a pollination crisis of crops and wild plants. However, experimental evidence for effects of pollinator species diversity on plant reproduction is extremely scarce. We established communities with 1-5 bee species to test how seed production of a plant community is determined by bee diversity. Higher bee diversity resulted in higher seed production, but the strongest difference was observed for one compared to more than one bee species. Functional complementarity among bee species had a far higher explanatory power than bee diversity, suggesting that additional bee species only benefit pollination when they increase coverage of functional niches. In our experiment, complementarity was driven by differences in flower and temperature preferences. Interspecific interactions among bee species contributed to realized functional complementarity, as bees reduced interspecific overlap by shifting to alternative flowers in the presence of other species. This increased the number of plant species visited by a bee community and demonstrates a new mechanism for a biodiversity-function relationship ("interactive complementarity"). In conclusion, our results highlight both the importance of bee functional diversity for the reproduction of plant communities and the need to identify complementarity traits for accurately predicting pollination services by different bee communities.
Altman, Michael D; Bardhan, Jaydeep P; White, Jacob K; Tidor, Bruce
2009-01-15
We present a boundary-element method (BEM) implementation for accurately solving problems in biomolecular electrostatics using the linearized Poisson-Boltzmann equation. Motivating this implementation is the desire to create a solver capable of precisely describing the geometries and topologies prevalent in continuum models of biological molecules. This implementation is enabled by the synthesis of four technologies developed or implemented specifically for this work. First, molecular and accessible surfaces used to describe dielectric and ion-exclusion boundaries were discretized with curved boundary elements that faithfully reproduce molecular geometries. Second, we avoided explicitly forming the dense BEM matrices and instead solved the linear systems with a preconditioned iterative method (GMRES), using a matrix compression algorithm (FFTSVD) to accelerate matrix-vector multiplication. Third, robust numerical integration methods were employed to accurately evaluate singular and near-singular integrals over the curved boundary elements. Fourth, we present a general boundary-integral approach capable of modeling an arbitrary number of embedded homogeneous dielectric regions with differing dielectric constants, possible salt treatment, and point charges. A comparison of the presented BEM implementation and standard finite-difference techniques demonstrates that for certain classes of electrostatic calculations, such as determining absolute electrostatic solvation and rigid-binding free energies, the improved convergence properties of the BEM approach can have a significant impact on computed energetics. We also demonstrate that the improved accuracy offered by the curved-element BEM is important when more sophisticated techniques, such as nonrigid-binding models, are used to compute the relative electrostatic effects of molecular modifications. In addition, we show that electrostatic calculations requiring multiple solves using the same molecular geometry, such as
Self-complementarity within proteins: bridging the gap between binding and folding.
Basu, Sankar; Bhattacharyya, Dhananjay; Banerjee, Rahul
2012-06-06
Complementarity, in terms of both shape and electrostatic potential, has been quantitatively estimated at protein-protein interfaces and used extensively to predict the specific geometry of association between interacting proteins. In this work, we attempted to place both binding and folding on a common conceptual platform based on complementarity. To that end, we estimated (for the first time to our knowledge) electrostatic complementarity (Em) for residues buried within proteins. Em measures the correlation of surface electrostatic potential at protein interiors. The results show fairly uniform and significant values for all amino acids. Interestingly, hydrophobic side chains also attain appreciable complementarity primarily due to the trajectory of the main chain. Previous work from our laboratory characterized the surface (or shape) complementarity (Sm) of interior residues, and both of these measures have now been combined to derive two scoring functions to identify the native fold amid a set of decoys. These scoring functions are somewhat similar to functions that discriminate among multiple solutions in a protein-protein docking exercise. The performances of both of these functions on state-of-the-art databases were comparable if not better than most currently available scoring functions. Thus, analogously to interfacial residues of protein chains associated (docked) with specific geometry, amino acids found in the native interior have to satisfy fairly stringent constraints in terms of both Sm and Em. The functions were also found to be useful for correctly identifying the same fold for two sequences with low sequence identity. Finally, inspired by the Ramachandran plot, we developed a plot of Sm versus Em (referred to as the complementarity plot) that identifies residues with suboptimal packing and electrostatics which appear to be correlated to coordinate errors.
A methodology to quantify and optimize time complementarity between hydropower and solar PV systems
NASA Astrophysics Data System (ADS)
Kougias, Ioannis; Szabó, Sándor; Monforti-Ferrario, Fabio; Huld, Thomas; Bódis, Katalin
2016-04-01
Hydropower and solar energy are expected to play a major role in achieving renewable energy source (RES) penetration targets. However, the integration of RES in the energy mix needs to overcome technical challenges related to grid operation. There is therefore an increasing need to explore approaches in which different RES operate synergistically. Ideally, hydropower and solar PV systems can be jointly developed so that their electricity output profiles complement each other as much as possible, minimizing the need for reserve capacities and storage costs. A straightforward way to achieve this is by optimizing the complementarity among RES systems both over time and spatially. The present research developed a methodology that quantifies the degree of time complementarity between small-scale hydropower stations and solar PV systems and examines ways to increase it. The methodology analyses high-resolution spatial and temporal data for solar radiation obtained from the existing PVGIS model (available online at: http://re.jrc.ec.europa.eu/pvgis/) and associates it with hydrological information on water inflows to a hydropower station. It builds on an exhaustive optimization algorithm that tests possible alterations of the PV system installation (azimuth, tilt) aiming to increase the complementarity, with minor compromises in the total solar energy output. The methodology has been tested in several case studies and the results indicated variations among regions and different hydraulic regimes. In some cases a small compromise in the solar energy output yielded significant increases in complementarity, while in other cases the effect is not as strong. Our contribution presents these findings in detail and aims to initiate a discussion on the role and gains of increased complementarity between solar and hydropower energies.
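One simple way to quantify the time complementarity discussed above is via the (anti-)correlation of the two output profiles. The sketch below uses the index (1 - r)/2, which is 1 for perfectly anti-correlated profiles and 0 for identical ones; this index choice and the synthetic 24-hour profiles are illustrative assumptions, and the paper's exhaustive search additionally re-simulates the PV profile (via PVGIS) for each candidate tilt and azimuth:

```python
import numpy as np

# Synthetic hourly profiles: a daytime PV curve and a night-heavy
# hydropower inflow constructed to be anti-correlated with it.
hours = np.arange(24)
solar = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)
hydro = 1.0 - 0.8 * solar

def complementarity(a, b):
    """Temporal complementarity index in [0, 1] from Pearson r."""
    r = np.corrcoef(a, b)[0, 1]
    return (1.0 - r) / 2.0

c = complementarity(solar, hydro)
```

Wrapping this index in a loop over candidate (azimuth, tilt) PV profiles, each with its associated energy output, would reproduce the structure of the exhaustive optimization the abstract describes.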
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet, it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center-of-mass energy above, say, 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies, a number of problems have to be solved. These problems are of two kinds: those related to the feasibility of the principle itself, and those associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype, I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
NASA Astrophysics Data System (ADS)
Carles, M.; Torres-Espallardo, I.; Alberich-Bayarri, A.; Olivas, C.; Bello, P.; Nestle, U.; Martí-Bonmatí, L.
2017-01-01
A major source of error in quantitative PET/CT scans of lung cancer tumors is respiratory motion. Regarding the variability of PET texture features (TF), the impact of respiratory motion has not been properly studied with experimental phantoms. The primary aim of this work was to evaluate the current use of PET texture analysis for heterogeneity characterization in lesions affected by respiratory motion. Twenty-eight heterogeneous lesions were simulated by a mixture of alginate and 18F-fluoro-2-deoxy-D-glucose (FDG). Sixteen respiratory patterns were applied. Firstly, the TF response for different heterogeneous phantoms and its robustness with respect to the segmentation method were calculated. Secondly, the variability for TF derived from PET images with (gated, G-) and without (ungated, U-) motion compensation was analyzed. Finally, TF complementarity was assessed. In the comparison of TF derived from the ideal contour with respect to TF derived from 40%-threshold and adaptive-threshold PET contours, 7/8 TF showed strong linear correlation (LC) (p < 0.001, r > 0.75), despite a significant volume underestimation. Independence of lesion movement (LC in 100% of the combined pairs of movements, p < 0.05) was obtained for 1/8 TF with the U-image (width of the volume-activity histogram, WH) and 4/8 TF with the G-image (WH, and energy (ENG), local homogeneity (LH) and entropy (ENT), derived from the co-occurrence matrix). Their variability in terms of the coefficient of variance (C_V) was C_V(WH) = 0.18 on the U-image, and C_V(WH) = 0.24, C_V(ENG) = 0.15, C_V(LH) = 0.07 and C_V(ENT) = 0.06 on the G-image. Apart from WH (r > 0.9, p < 0.001), none of these TF showed LC with C_max. Complementarity was observed for the TF pairs ENG-LH, CONT (contrast)-ENT and LH-ENT. In conclusion, the effect of
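The co-occurrence-matrix features named in this abstract (ENG, LH, ENT, CONT) can be computed from a grey-level image with plain numpy; the 8-level toy image below stands in for a discretised PET slice:

```python
import numpy as np

def glcm(img, levels):
    """Symmetric, normalised grey-level co-occurrence matrix for a
    1-pixel horizontal neighbour offset (numpy only)."""
    P = np.zeros((levels, levels))
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    np.add.at(P, (left, right), 1)   # count co-occurring grey-level pairs
    P = P + P.T                      # make the matrix symmetric
    return P / P.sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(64, 64))          # toy 8-level "PET" slice
P = glcm(img, 8)
i, j = np.indices(P.shape)
energy = (P ** 2).sum()                           # ENG
entropy = -(P[P > 0] * np.log2(P[P > 0])).sum()   # ENT
local_homogeneity = (P / (1 + (i - j) ** 2)).sum()  # LH
contrast = (P * (i - j) ** 2).sum()               # CONT
```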
Delayed-choice test of quantum complementarity with interfering single photons.
Jacques, Vincent; Wu, E; Grosshans, Frédéric; Treussart, François; Grangier, Philippe; Aspect, Alain; Roch, Jean-François
2008-06-06
We report an experimental test of quantum complementarity with single-photon pulses sent into a Mach-Zehnder interferometer with an output beam splitter of adjustable reflection coefficient R. In addition, the experiment is realized in Wheeler's delayed-choice regime. Each randomly set value of R allows us to observe interference with visibility V and to obtain incomplete which-path information characterized by the distinguishability parameter D. Measured values of V and D are found to fulfill the complementarity relation V² + D² ≤ 1.
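For an ideal lossless interferometer with a 50/50 input beam splitter, textbook formulas give V = 2√(R(1−R)) and D = |1 − 2R|, which saturate the bound; experimental imperfections only push V² + D² below 1. A quick numerical check of the idealized model (not the paper's data):

```python
import numpy as np

# Ideal Mach-Zehnder with output beam splitter of reflectivity R:
#   visibility          V = 2*sqrt(R*(1-R))
#   distinguishability  D = |1 - 2R|
R = np.linspace(0.0, 1.0, 101)
V = 2 * np.sqrt(R * (1 - R))
D = np.abs(1 - 2 * R)
assert np.allclose(V**2 + D**2, 1.0)   # the bound is saturated in the ideal case
```

At R = 0.5 the which-path information vanishes (D = 0) and the fringe visibility is maximal (V = 1), recovering the familiar extreme cases of the duality relation.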
Experimental investigation of halogen-bond hard-soft acid-base complementarity.
Riel, Asia Marie S; Jessop, Morly J; Decato, Daniel A; Massena, Casey J; Nascimento, Vinicius R; Berryman, Orion B
2017-04-01
The halogen bond (XB) is a topical noncovalent interaction of rapidly increasing importance. The XB employs a 'soft' donor atom in comparison to the 'hard' proton of the hydrogen bond (HB). This difference has led to the hypothesis that XBs can form more favorable interactions with 'soft' bases than HBs. While computational studies have supported this suggestion, solution and solid-state data are lacking. Here, XB soft-soft complementarity is investigated with a bidentate receptor that shows similar associations with neutral carbonyls and heavy chalcogen analogs. The solution speciation and XB soft-soft complementarity are supported by four crystal structures containing neutral and anionic soft Lewis bases.
NASA Astrophysics Data System (ADS)
Portela, César; Afonso, Carlos M. M.; Pinto, Madalena M. M.; João Ramos, Maria
2003-09-01
One of the most important pharmacological mechanisms of antimalarial action is the inhibition of the aggregation of hematin into hemozoin. We present a group of new potential antimalarial molecules for which we have performed a DFT study of their stereoelectronic properties. Additionally, the same calculations were carried out for the two putative drug receptors involved in the referred activity, i.e., hematin μ-oxo dimer and hemozoin. A complementarity between the structural and electronic profiles of the planned molecules and the receptors can be observed. A docking study of the new compounds in relation to the two putative receptors is also presented, providing a correlation with the defined electrostatic complementarity.
Metabolic Complementarity and Genomics of the Dual Bacterial Symbiosis of Sharpshooters
Wu, Dongying; Daugherty, Sean C; Van Aken, Susan E; Pai, Grace H; Watkins, Kisha L; Khouri, Hoda; Tallon, Luke J; Zaborsky, Jennifer M; Dunbar, Helen E; Tran, Phat L; Moran, Nancy A
2006-01-01
Mutualistic intracellular symbiosis between bacteria and insects is a widespread phenomenon that has contributed to the global success of insects. The symbionts, by provisioning nutrients lacking from diets, allow various insects to occupy or dominate ecological niches that might otherwise be unavailable. One such insect is the glassy-winged sharpshooter (Homalodisca coagulata), which feeds on xylem fluid, a diet exceptionally poor in organic nutrients. Phylogenetic studies based on rRNA have shown two types of bacterial symbionts to be coevolving with sharpshooters: the gamma-proteobacterium Baumannia cicadellinicola and the Bacteroidetes species Sulcia muelleri. We report here the sequencing and analysis of the 686,192–base pair genome of B. cicadellinicola and approximately 150 kilobase pairs of the small genome of S. muelleri, both isolated from H. coagulata. Our study, which to our knowledge is the first genomic analysis of an obligate symbiosis involving multiple partners, suggests striking complementarity in the biosynthetic capabilities of the two symbionts: B. cicadellinicola devotes a substantial portion of its genome to the biosynthesis of vitamins and cofactors required by animals and lacks most amino acid biosynthetic pathways, whereas S. muelleri apparently produces most or all of the essential amino acids needed by its host. This finding, along with other results of our genome analysis, suggests the existence of metabolic codependency among the two unrelated endosymbionts and their insect host. This dual symbiosis provides a model case for studying correlated genome evolution and genome reduction involving multiple organisms in an intimate, obligate mutualistic relationship. In addition, our analysis provides insight for the first time into the differences in symbionts between insects (e.g., aphids) that feed on phloem versus those like H. coagulata that feed on xylem. Finally, the genomes of these two symbionts provide potential targets for
Georgatzis, Konstantinos; Lal, Partha; Hawthorne, Christopher; Shaw, Martin; Piper, Ian; Tarbert, Claire; Donald, Rob; Williams, Christopher K I
2016-01-01
High-resolution, artefact-free and accurately annotated physiological data are desirable in patients with brain injury both to inform clinical decision-making and for intelligent analysis of the data in applications such as predictive modelling. We have quantified the quality of annotation surrounding artefactual events and propose a factorial switching linear dynamical systems (FSLDS) approach to automatically detect artefact in physiological data collected in the neurological intensive care unit (NICU). Retrospective analysis of the BrainIT data set to discover potential hypotensive events corrupted by artefact and identify the annotation of associated clinical interventions. Training of an FSLDS model on clinician-annotated artefactual events in five patients with severe traumatic brain injury. In a subset of 187 patients in the BrainIT database, 26.5 % of potential hypotensive events were abandoned because of artefactual data. Only 30 % of these episodes could be attributed to an annotated clinical intervention. As assessed by the area under the receiver operating characteristic curve metric, FSLDS model performance in automatically identifying the events of blood sampling, arterial line damping and patient handling was 0.978, 0.987 and 0.765, respectively. The influence of artefact on physiological data collected in the NICU is a significant problem. This pilot study using an FSLDS approach shows real promise and is under further development.
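A heavily simplified sketch of the idea behind the abstract above: a 1D Kalman filter tracks the physiological signal, and each sample is classified by comparing its likelihood under a "clean" and an "artefact" observation model. This per-step classification is a crude stand-in for full FSLDS inference, and all signal parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
x = np.cumsum(rng.normal(0, 0.1, T)) + 80.0          # latent "true" signal
y = x + rng.normal(0, 0.5, T)                        # noisy observations
artefact = np.zeros(T, bool)
artefact[100:110] = artefact[200:215] = True
y[artefact] = 5.0 + rng.normal(0, 2.0, artefact.sum())  # sampling-like dropouts

# Random-walk state; two observation regimes (clean vs. artefact).
q, r_clean, r_art, mu_art = 0.1**2, 0.5**2, 50.0**2, 5.0
m, P = y[0], 1.0
flags = np.zeros(T, bool)
for t in range(T):
    P = P + q                                        # predict step
    s_clean = P + r_clean
    ll_clean = -0.5 * ((y[t] - m)**2 / s_clean + np.log(2 * np.pi * s_clean))
    ll_art = -0.5 * ((y[t] - mu_art)**2 / r_art + np.log(2 * np.pi * r_art))
    flags[t] = ll_art > ll_clean                     # naive regime choice
    if not flags[t]:                                 # update only on clean data
        k = P / s_clean
        m, P = m + k * (y[t] - m), (1 - k) * P

accuracy = (flags == artefact).mean()
```

A real FSLDS additionally models regime transition probabilities and factored hidden causes; this sketch only illustrates why a switching observation model can separate artefact from physiology.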
Cundiff, Jenny M; Smith, Timothy W; Butner, Jonathan; Critchfield, Kenneth L; Nealey-Moore, Jill
2015-01-01
The principle of complementarity in interpersonal theory states that an actor's behavior tends to "pull, elicit, invite, or evoke" responses from interaction partners that are similar in affiliation (i.e., warmth vs. hostility) and opposite in control (i.e., dominance vs. submissiveness). Furthermore, complementary interactions are proposed to evoke less negative affect and promote greater relationship satisfaction. These predictions were examined in two studies of married couples. Results suggest that complementarity in affiliation describes a robust general pattern of marital interaction, but complementarity in control varies across contexts. Consistent with behavioral models of marital interaction, greater levels of affiliation and lower control by partners, not complementarity in affiliation or control, were associated with less anger and anxiety and greater relationship quality. Partners' levels of affiliation and control combined in ways other than complementarity (mostly additively, but sometimes synergistically) to predict negative affect and relationship satisfaction.
NASA Astrophysics Data System (ADS)
Marçais, J.; De Dreuzy, J. R.; Erhel, J.
2016-12-01
Landscape structure and geological heterogeneities are major controls on subsurface flow dynamics. In particular, they strongly influence the emergence of saturated areas (potential hotspots), promoting seepage production. This control leads to a highly nonlinear flow response in which, at each hillslope location, two states can be distinguished: with or without seepage. Different algorithmic solutions have been proposed to model this process. One solution is to solve the subsurface flow for an assumed water table position and then iterate on the water table position until convergence is met. Seepage areas and seepage values are then deduced from the locations where the water table intersects the surface. A second way to proceed is to explicitly couple the groundwater equations and the surface water equations with an exchange flux. Here we developed a novel approach using the complementarity framework to reconcile in a single system the two states potentially encountered (with or without seepage). The complementarity framework manages the current state and the possible transitions between states through a specifically devoted equation. This framework is applied to the 1D hillslope storage Boussinesq equations (Troch et al. 2003). Reformulating this complementarity system enables an effective partitioning of the local flux balance between storage variation and seepage. This differential algebraic equations (DAE) system has the major benefit of being directly solvable with built-in ODE libraries. Finally, the system is regularized for fast and efficient solving. The model is stable, with fast spatial convergence. It respects the local mass balance within the tolerance limits and shows limited sensitivity to the value of the regularization parameter. The model appears to be robust, able to solve complex realistic cases with landscape heterogeneity and real hydrologic forcing. This model will then be used as a physical basis to implement biogeochemical reactivity.
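The complementarity formulation can be illustrated on a single-cell toy version of the hillslope problem, using a regularised Fischer-Burmeister function to encode the seepage condition (the actual model solves the full Boussinesq DAE system; all parameters here are invented):

```python
import numpy as np
from scipy.optimize import fsolve

def fb(a, b, eps=1e-10):
    """Regularised Fischer-Burmeister function:
    fb(a, b) = 0  <=>  a >= 0, b >= 0 and a*b ~= 0."""
    return a + b - np.sqrt(a * a + b * b + eps)

def steady_bucket(q_in, k=1.0, h_max=1.0):
    """Toy single-cell hillslope: storage h drains at rate k*h; seepage s
    appears only once the water table reaches the surface (h = h_max)."""
    def residual(z):
        h, s = z
        return [q_in - k * h - s,      # steady mass balance
                fb(s, h_max - h)]      # complementarity: s >= 0 or h < h_max
    h, s = fsolve(residual, [0.5, 0.0])
    return h, s

h1, s1 = steady_bucket(q_in=0.4)   # below capacity: no seepage, h = q_in/k
h2, s2 = steady_bucket(q_in=1.5)   # at capacity: h = h_max, seepage takes excess
```

The single smooth equation `fb(s, h_max - h) = 0` replaces the two discrete states (seepage on/off), which is exactly what makes the system solvable by standard DAE/ODE machinery.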
ERIC Educational Resources Information Center
O'Toole, John; Dunn, Julie
2008-01-01
This article reports the findings of a research project that saw researchers from interaction design and drama education come together with a group of eleven and twelve year olds to investigate the current and future complementarity of computers and live classroom drama. The project was part of a pilot feasibility study commissioned by the…
Complementarity as a Program Evaluation Strategy: A Focus on Qualitative and Quantitative Methods.
ERIC Educational Resources Information Center
Lafleur, Clay
Use of complementarity as a deliberate and necessary program evaluation strategy is discussed. Quantitative and qualitative approaches are viewed as complementary and can be integrated into a single study. The synergy that results from using complementary methods in a single study seems to enhance understanding and interpretation. A review of the…
Is the firewall consistent? Gedanken experiments on black hole complementarity and firewall proposal
Hwang, Dong-il; Lee, Bum-Hoon; Yeom, Dong-han
2013-01-01
In this paper, we discuss the black hole complementarity and the firewall proposal at length. Black hole complementarity is inevitable if we assume the following five things: unitarity, entropy-area formula, existence of an information observer, semi-classical quantum field theory for an asymptotic observer, and the general relativity for an in-falling observer. However, large N rescaling and the AMPS argument show that black hole complementarity is inconsistent. To salvage the basic philosophy of the black hole complementarity, AMPS introduced a firewall around the horizon. According to large N rescaling, the firewall should be located close to the apparent horizon. We investigate the consistency of the firewall with the two critical conditions: the firewall should be near the time-like apparent horizon and it should not affect the future infinity. Concerning this, we have introduced a gravitational collapse with a false vacuum lump which can generate a spacetime structure with disconnected apparent horizons. This reveals a situation in which there is a firewall outside of the event horizon, while the apparent horizon is absent. Therefore, the firewall, if it exists, not only modifies the general relativity for an in-falling observer, but also modifies the semi-classical quantum field theory for an asymptotic observer.
Climate change mitigation and adaptation in the land use sector: from complementarity to synergy.
Duguma, Lalisa A; Minang, Peter A; van Noordwijk, Meine
2014-09-01
Currently, mitigation and adaptation measures are handled separately, due to differences in priorities for the measures and segregated planning and implementation policies at international and national levels. There is a growing argument that synergistic approaches to adaptation and mitigation could bring substantial benefits at multiple scales in the land use sector. Nonetheless, efforts to implement synergies between adaptation and mitigation measures are rare due to the weak conceptual framing of the approach and constraining policy issues. In this paper, we explore the attributes of synergy and the necessary enabling conditions and discuss, as an example, experience with the Ngitili system in Tanzania that serves both adaptation and mitigation functions. An in-depth look into the current practices suggests that more emphasis is laid on complementarity, i.e., mitigation projects providing adaptation co-benefits and vice versa, rather than on synergy. Unlike complementarity, synergy should emphasize functionally sustainable landscape systems in which adaptation and mitigation are optimized as part of multiple functions. We argue that the current practice of seeking co-benefits (complementarity) is a necessary but insufficient step toward addressing synergy. Moving forward from complementarity will require a paradigm shift from current compartmentalization between mitigation and adaptation to systems thinking at landscape scale. However, enabling policy, institutional, and investment conditions need to be developed at global, national, and local levels to achieve synergistic goals.
Revisiting the quark-lepton complementarity and triminimal parametrization of neutrino mixing matrix
Kang, Sin Kyu
2011-05-01
We examine how a parametrization of neutrino mixing matrix reflecting quark-lepton complementarity can be probed by considering phase-averaged oscillation probabilities, flavor composition of neutrino fluxes coming from atmospheric and astrophysical neutrinos and lepton flavor violating radiative decays. We discuss some distinct features of the parametrization by comparing the triminimal parametrization of perturbations to the tribimaximal neutrino mixing matrix.
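The complementarity relation underlying the parametrization is a simple numerical statement, θ12 + θC ≈ 45°; it can be checked with round-number estimates of the two angles (assumed values, not taken from this paper):

```python
# Quark-lepton complementarity: the solar neutrino mixing angle theta12 and
# the Cabibbo angle theta_C approximately sum to 45 degrees.
# Both angle values below are assumed, round-number global-fit estimates.
theta12_deg = 33.5   # solar mixing angle (assumed value)
theta_C_deg = 13.0   # Cabibbo angle (assumed value)
total = theta12_deg + theta_C_deg
print(f"theta12 + theta_C = {total:.1f} deg (complementarity predicts ~45 deg)")
```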
Hernandez, Pauline; Picon-Cochard, Catherine
2016-01-01
Legume species promote productivity and increase the digestibility of herbage in grasslands. Considerable experimental data also indicate that communities with legumes produce more above-ground biomass than is expected from monocultures. While this has been attributed to N facilitation, evidence to identify the mechanisms involved is still lacking, and the role of complementarity in soil water acquisition by vertical root differentiation remains unclear. We used a 20-month mesocosm experiment to investigate the effects of species richness (single species, two- and five-species mixtures) and functional diversity (presence of the legume Trifolium repens) on a set of traits related to light, N and water use, measured at community level. We found a positive effect of Trifolium presence and abundance on biomass production and complementarity effects in the two-species mixtures from the second year. In addition, the community traits related to water and N acquisition and use (leaf area, N, water-use efficiency, and deep root growth) were higher in the presence of Trifolium. With a multiple regression approach, we showed that the traits related to water acquisition and use were, with N, the main determinants of biomass production and complementarity effects in diverse mixtures. At shallow soil layers, the lower root mass of Trifolium and higher soil moisture should increase soil water availability for the associated grass species. Conversely, at the deep soil layer, higher root growth and lower soil moisture mirror the increased soil resource use of mixtures. Altogether, these results highlight N facilitation but also soil vertical differentiation, and thus complementarity for water acquisition and use, in mixtures with Trifolium. Contrary to grass-Trifolium mixtures, no significant over-yielding was measured for grass mixtures, even those with complementary traits (short and shallow vs. tall and deep). Thus, vertical complementarity for soil resources uptake in mixtures was not only
Variationally consistent discretization schemes and numerical algorithms for contact problems
NASA Astrophysics Data System (ADS)
Wohlmuth, Barbara
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of
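A minimal sketch of the primal-dual active set strategy mentioned above, applied to a finite-dimensional LCP via the non-smooth reformulation min(x, Mx + q) = 0, with an M-matrix as arises from discretised obstacle-type problems (a toy problem, not the paper's contact discretisation):

```python
import numpy as np

def pdas_lcp(M, q, max_iter=50):
    """Primal-dual active-set (semi-smooth Newton) method for the LCP
        w = M x + q,  x >= 0,  w >= 0,  x^T w = 0,
    via the reformulation min(x, M x + q) = 0. Converges in finitely
    many steps for M-matrices (illustrative sketch)."""
    n = len(q)
    x = np.zeros(n)
    active = None
    for _ in range(max_iter):
        w = M @ x + q
        new_active = x < w               # indices forced to the bound x_i = 0
        if active is not None and np.array_equal(new_active, active):
            return x                     # active set stabilised -> solved
        active = new_active
        x = np.zeros(n)
        free = ~active
        if free.any():                   # solve (M x + q)_free = 0, x_active = 0
            x[free] = np.linalg.solve(M[np.ix_(free, free)], -q[free])
    return x

# 1D obstacle-type example: discrete Laplacian (an M-matrix), mixed-sign load
n = 8
M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
q = np.linspace(-1.0, 1.0, n)
x = pdas_lcp(M, q)
w = M @ x + q
```

Each iteration is one semi-smooth Newton step: the "active" components are fixed to the bound and a reduced linear system is solved on the rest, exactly the structure the abstract attributes to the primal-dual active set strategy.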
Hydro-elastic complementarity in black branes at large D
NASA Astrophysics Data System (ADS)
Emparan, Roberto; Izumi, Keisuke; Luna, Raimon; Suzuki, Ryotaku; Tanabe, Kentaro
2016-06-01
We obtain the effective theory for the non-linear dynamics of black branes — both neutral and charged, in asymptotically flat or Anti-deSitter spacetimes — to leading order in the inverse-dimensional expansion. We find that black branes evolve as viscous fluids, but when they settle down they are more naturally viewed as solutions of an elastic soap-bubble theory. The two views are complementary: the same variable is regarded in one case as the energy density of the fluid, in the other as the deformation of the elastic membrane. The large- D theory captures finite-wavelength phenomena beyond the conventional reach of hydrodynamics. For asymptotically flat charged black branes (either Reissner-Nordstrom or p-brane-charged black branes) it yields the non-linear evolution of the Gregory-Laflamme instability at large D and its endpoint at stable non-uniform black branes. For Reissner-Nordstrom AdS black branes we find that sound perturbations do not propagate (have purely imaginary frequency) when their wavelength is below a certain charge-dependent value. We also study the polarization of black branes induced by an external electric field.
Makarov, V A; Petnikova, V M; Shuvalov, V V
2013-10-31
We have analysed self-similar solutions to the propagation problem of a slit beam with a plane wavefront in a linear medium and in a photorefractive crystal with diffusion nonlinearity. It is shown that in the latter case, despite the presence of the nonlinear term in the wave equation, the linear superposition principle holds true for the solutions of this class due to saturation. At the same time, the mirror symmetry violation of the wave equation for the transverse coordinate in the nonlinear case and the requirement to the spatial localisation modify one of the localised partial solutions (Airy beam) to the corresponding linear problem and prohibit the existence of other solutions of this class. (laser beams)
Fahmida, Umi; Kolopaking, Risatianti; Santika, Otte; Sriani, Sriani; Umar, Jahja; Htet, Min Kyaw; Ferguson, Elaine
2015-03-01
Complementary feeding recommendations (CFRs) with the use of locally available foods can be developed by using linear programming (LP). Although its potential has been shown for planning phases of food-based interventions, the effectiveness in the community setting has not been tested to our knowledge. We aimed to assess effectiveness of promoting optimized CFRs for improving maternal knowledge, feeding practices, and child intakes of key problem nutrients (calcium, iron, niacin, and zinc). A community-intervention trial with a quasi-experimental design was conducted in East Lombok, West Nusa Tenggara Province, Indonesia, on children aged 9-16 mo at baseline. A CFR group (n = 240) was compared with a non-CFR group (n = 215). The CFRs, which were developed using LP, were promoted in an intervention that included monthly cooking sessions and weekly home visits. The mother's nutrition knowledge and her child's feeding practices and the child's nutrient intakes were measured before and after the 6-mo intervention by using a structured interview, 24-h recall, and 1-wk food-frequency questionnaire. The CFR intervention improved mothers' knowledge and children's feeding practices and improved children's intakes of calcium, iron, and zinc. At the end line, median (IQR) nutrient densities were significantly higher in the CFR group than in the non-CFR group for iron [i.e., 0.6 mg/100 kcal (0.4-0.8 mg/100 kcal) compared with 0.5 mg/100 kcal (0.4-0.7 mg/100 kcal)] and niacin [i.e., 0.8 mg/100 kcal (0.5-1.0 mg/100 kcal) compared with 0.6 mg/100 kcal (0.4-0.8 mg/100 kcal)]. However, median nutrient densities for calcium, iron, niacin, and zinc in the CFR group (23, 0.6, 0.7, and 0.5 mg/100 kcal, respectively) were still below desired densities (63, 1.0, 0.9, and 0.6 mg/100 kcal, respectively). The CFRs significantly increased intakes of calcium, iron, niacin, and zinc, but nutrient densities were still below desired nutrient densities. When the adoption of optimized CFRs is
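The LP step behind CFR design can be sketched with scipy: choose daily portions of local foods that meet nutrient targets at minimum cost. All foods, nutrient contents and targets below are invented for illustration, not the study's data:

```python
import numpy as np
from scipy.optimize import linprog

foods = ["rice", "fish", "legumes", "greens"]
cost = [0.10, 0.60, 0.25, 0.15]               # cost per portion (hypothetical)
# Rows: calcium, iron, niacin, zinc (mg per portion, all values invented)
nutrients = np.array([
    [10.0, 60.0, 80.0, 120.0],   # calcium
    [ 0.2,  1.0,  2.5,   1.8],   # iron
    [ 0.5,  2.5,  1.0,   0.6],   # niacin
    [ 0.4,  0.8,  1.1,   0.3],   # zinc
])
targets = np.array([400.0, 8.0, 6.0, 4.0])    # desired daily amounts (invented)

# linprog minimises c @ x s.t. A_ub @ x <= b_ub, so "at least the target"
# constraints are written with a sign flip; portions are capped at 5 per food.
res = linprog(c=cost, A_ub=-nutrients, b_ub=-targets,
              bounds=[(0, 5)] * len(foods), method="highs")
portions = dict(zip(foods, np.round(res.x, 2)))
```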
Linear quadratic optimal control for symmetric systems
NASA Technical Reports Server (NTRS)
Lewis, J. H.; Martin, C. F.
1983-01-01
Special symmetries are present in many control problems. This paper addresses the problem of determining linear-quadratic optimal control problems whose solutions preserve the symmetry of the initial linear control system.
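The symmetry-preservation property can be checked numerically on a toy plant: two identical, symmetrically coupled subsystems whose dynamics commute with the swap permutation P. With symmetric weights, the optimal gain K = R⁻¹BᵀX inherits the same symmetry (all matrices below are assumed example values):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-1.0, 0.5],
              [ 0.5, -1.0]])     # two identical, symmetrically coupled states
B = np.eye(2)
Q = np.eye(2)
R = np.eye(2)
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # permutation swapping the two subsystems

X = solve_continuous_are(A, B, Q, R)   # Riccati solution
K = np.linalg.solve(R, B.T @ X)        # LQR gain

assert np.allclose(P @ A @ P, A)       # the plant is symmetric under the swap
assert np.allclose(P @ K @ P, K)       # ...and so is the optimal feedback
```

Since A, B, Q and R all commute with P, so does the unique stabilising Riccati solution X, and hence the gain K: the optimal control preserves the symmetry of the initial system.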
Wave-particle dualism and complementarity unraveled by a different mode.
Menzel, Ralf; Puhlmann, Dirk; Heuer, Axel; Schleich, Wolfgang P
2012-06-12
The precise knowledge of one of two complementary experimental outcomes prevents us from obtaining complete information about the other one. This formulation of Niels Bohr's principle of complementarity when applied to the paradigm of wave-particle dualism--that is, to Young's double-slit experiment--implies that the information about the slit through which a quantum particle has passed erases interference. In the present paper we report a double-slit experiment using two photons created by spontaneous parametric down-conversion where we observe interference in the signal photon despite the fact that we have located it in one of the slits due to its entanglement with the idler photon. This surprising aspect of complementarity comes to light by our special choice of the TEM(01) pump mode. According to quantum field theory the signal photon is then in a coherent superposition of two distinct wave vectors giving rise to interference fringes analogous to two mechanical slits.
Frogs and ponds: a multilevel analysis of the regulatory mode complementarity hypothesis.
Pierro, Antonio; Presaghi, Fabio; Higgins, E Tory; Klein, Kristen M; Kruglanski, Arie W
2012-02-01
Regulatory mode is a psychological construct pertaining to the self-regulatory orientation of individuals or teams engaged in goal pursuit. Locomotion, the desire for continuous progress or movement in goal pursuit, and assessment, the desire to critically evaluate and compare goals and means, are orthogonal regulatory modes. However, they are also complementary, in that both locomotion and assessment are necessary for effectual goal pursuit. In the present research, the authors sought to demonstrate that multilevel regulatory mode complementarity can positively affect individual-level performance on goal-relevant tasks. The authors recruited 289 employees (177 men, 112 women) from preexisting work teams in workplace organizations in Italy and obtained (a) employees' individual-level scores on the Regulatory Mode Scale and (b) supervisor ratings of each employee's work performance. The results supported the multilevel complementarity hypothesis for regulatory mode. Limitations and recommendations for future research are discussed.
New measures for estimating surface complementarity and packing at protein-protein interfaces.
Mitra, Pralay; Pal, Debnath
2010-03-19
A number of methods exist that use different approaches to assess geometric properties like the surface complementarity and atom packing at the protein-protein interface. We have developed two new and conceptually different measures using the Delaunay tessellation and interface slice selection to compute the surface complementarity and atom packing at the protein-protein interface in a straightforward manner. Our measures show a strong correlation among themselves and with other existing measures, and can be calculated in a highly time-efficient manner. The measures are discriminative for evaluating biological, as well as non-biological protein-protein contacts, especially from large protein complexes and large-scale structural studies (http://pallab.serc.iisc.ernet.in/nip_nsc).
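A conceptually similar (but much cruder) tessellation-based measure can be sketched with scipy: pool the atoms of two chains, build the Delaunay tessellation of the union, and take the fraction of tetrahedra whose vertices mix both chains as an interface proxy. The point clouds and the measure itself are illustrative, not the paper's definitions:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(7)
chain_a = rng.normal([0.0, 0.0, 0.0], 1.0, size=(40, 3))  # toy "chain A" atoms
chain_b = rng.normal([2.5, 0.0, 0.0], 1.0, size=(40, 3))  # toy "chain B" atoms
points = np.vstack([chain_a, chain_b])
label = np.array([0] * 40 + [1] * 40)

tess = Delaunay(points)              # each simplex is a tetrahedron in 3D
mixed = sum(1 for simplex in tess.simplices
            if set(label[simplex]) == {0, 1})   # vertices from both chains
interface_fraction = mixed / len(tess.simplices)
```

Tetrahedra that bridge the two chains concentrate at the contact region, so their count responds to how tightly the two surfaces interdigitate, which is the intuition behind tessellation-based packing measures.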
Complementarity of PALM and SOFI for super-resolution live-cell imaging of focal adhesions
NASA Astrophysics Data System (ADS)
Deschout, Hendrik; Lukes, Tomas; Sharipov, Azat; Szlag, Daniel; Feletti, Lely; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Leutenegger, Marcel; Lasser, Theo; Radenovic, Aleksandra
2016-12-01
Live-cell imaging of focal adhesions requires a sufficiently high temporal resolution, which remains a challenge for super-resolution microscopy. Here we address this important issue by combining photoactivated localization microscopy (PALM) with super-resolution optical fluctuation imaging (SOFI). Using simulations and fixed-cell focal adhesion images, we investigate the complementarity between PALM and SOFI in terms of spatial and temporal resolution. This PALM-SOFI framework is used to image focal adhesions in living cells, while obtaining a temporal resolution below 10 s. We visualize the dynamics of focal adhesions, and reveal local mean velocities around 190 nm/min. The complementarity of PALM and SOFI is assessed in detail with a methodology that integrates a resolution and signal-to-noise metric. This PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of focal adhesions through the estimation of molecular parameters such as fluorophore densities and photoactivation or photoswitching kinetics.
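Second-order SOFI rests on computing, per pixel, the second cumulant (the variance) of the intensity fluctuations over a frame stack, so that independently blinking emitters stand out against non-fluctuating background. A minimal sketch with one synthetic blinking emitter (all parameters illustrative, not from the study):

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI image: per-pixel second cumulant (variance)
    of the intensity time series in a (T, H, W) frame stack."""
    fluct = stack - stack.mean(axis=0)
    return (fluct ** 2).mean(axis=0)

# Synthetic stack: flat camera background plus one blinking emitter.
rng = np.random.default_rng(1)
T, H, W = 500, 16, 16
stack = rng.normal(10.0, 0.5, (T, H, W))   # background with readout noise
on = rng.random(T) < 0.5                   # stochastic on/off states
stack[:, 8, 8] += 20.0 * on                # blinking emitter at pixel (8, 8)

img = sofi2(stack)
print(img[8, 8] > 10 * np.median(img))     # emitter pixel dominates
```

Higher SOFI orders use higher cumulants of the same fluctuation series; the variance is the simplest case and already suppresses the constant background entirely.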
Plant diversity increases spatio-temporal niche complementarity in plant-pollinator interactions.
Venjakob, Christine; Klein, Alexandra-Maria; Ebeling, Anne; Tscharntke, Teja; Scherber, Christoph
2016-04-01
Ongoing biodiversity decline impairs ecosystem processes, including pollination. Flower visitation, an important indicator of pollination services, is influenced by plant species richness. However, the spatio-temporal responses of different pollinator groups to plant species richness have not yet been analyzed experimentally. Here, we used an experimental plant species richness gradient to analyze plant-pollinator interactions with an unprecedented spatio-temporal resolution. We observed four pollinator functional groups (honeybees, bumblebees, solitary bees, and hoverflies) in experimental plots at three different vegetation strata between sunrise and sunset. Visits were modified by plant species richness interacting with time and space. Furthermore, the complementarity of pollinator functional groups in space and time was stronger in species-rich mixtures. We conclude that high plant diversity should ensure stable pollination services, mediated via spatio-temporal niche complementarity in flower visitation.
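Spatio-temporal niche complementarity of the kind reported here is commonly quantified with a pairwise niche-overlap index such as Pianka's; low overlap across time or strata bins indicates complementary resource use. A sketch with made-up visit counts (the index is standard, the data are illustrative):

```python
import numpy as np

def pianka_overlap(p, q):
    """Pianka's niche overlap index between two resource-use vectors
    (e.g. flower visits per time-of-day or vegetation-stratum bin).
    Ranges from 0 (complete complementarity) to 1 (complete overlap)."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    return float(np.sum(p * q) / np.sqrt(np.sum(p ** 2) * np.sum(q ** 2)))

# Illustrative visit counts in four daytime bins for two pollinator groups:
bumblebees = [5, 20, 30, 10]   # midday-skewed (hypothetical)
hoverflies = [25, 10, 5, 2]    # morning-skewed (hypothetical)
print(round(pianka_overlap(bumblebees, hoverflies), 3))
```

Averaging such pairwise overlaps across all group pairs, per plot, gives one way to test whether overlap declines (i.e., complementarity rises) with plant species richness.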
Complementarity of resonant and nonresonant strong WW scattering at SSC and LHC
Chanowitz, M.S.
1992-08-01
Signals and backgrounds for strong WW scattering at the SSC and LHC are considered. Complementarity of resonant signals in the I = 1 WZ channel and nonresonant signals in the I = 2 W+W+ channel is illustrated using a chiral Lagrangian with a J = 1 "rho" resonance. Results are presented for purely leptonic final states in the W±Z, W+W+ + W−W−, and ZZ channels.
Todres, L; Wheeler, S
2001-02-01
This paper draws on the thinking of Husserl, Dilthey and Heidegger to identify elements of the phenomenological movement that can provide focus and direction for qualitative research in nursing. The authors interpret this tradition in two ways: emphasizing the possible complementarity of phenomenology, hermeneutics and existentialism, and demonstrating how these emphases ask for grounding, reflexivity and humanization in qualitative research. The paper shows that the themes of grounding, reflexivity and humanization are particularly important for nursing research.
Brown, Marion B; Schlacher, Thomas A; Schoeman, David S; Weston, Michael A; Huijbers, Chantal M; Olds, Andrew D; Connolly, Rod M
2015-10-01
Species composition is expected to alter ecological function in assemblages if species traits differ strongly. Such effects are often large and persistent for nonnative carnivores invading islands. Alternatively, high similarity in traits within assemblages creates a degree of functional redundancy in ecosystems. Here we tested whether species turnover results in functional ecological equivalence or complementarity, and whether invasive carnivores on islands significantly alter such ecological function. The model system consisted of vertebrate scavengers (dominated by raptors) foraging on animal carcasses on ocean beaches on two Australian islands, one with and one without invasive red foxes (Vulpes vulpes). Partitioning of scavenging events among species, carcass removal rates, and detection speeds were quantified using camera traps baited with fish carcasses at the dune-beach interface. Complete segregation of temporal foraging niches between mammals (nocturnal) and birds (diurnal) reflects complementarity in carrion utilization. Conversely, functional redundancy exists within the bird guild where several species of raptors dominate carrion removal in a broadly similar way. As predicted, effects of red foxes were large. They substantially changed the nature and rate of the scavenging process in the system: (1) foxes consumed over half (55%) of all carrion available at night, compared with negligible mammalian foraging at night on the fox-free island, and (2) significant shifts in the composition of the scavenger assemblages consuming beach-cast carrion are the consequence of fox invasion at one island. Arguably, in the absence of other mammalian apex predators, the addition of red foxes creates a new dimension of functional complementarity in beach food webs. However, this functional complementarity added by foxes is neither benign nor neutral, as marine carrion subsidies to coastal red fox populations are likely to facilitate their persistence as exotic
Kraut, Daniel A; Sigala, Paul A; Pybus, Brandon; Liu, Corey W; Ringe, Dagmar; Petsko, Gregory A; Herschlag, Daniel
2006-01-01
A longstanding proposal in enzymology is that enzymes are electrostatically and geometrically complementary to the transition states of the reactions they catalyze and that this complementarity contributes to catalysis. Experimental evaluation of this contribution, however, has been difficult. We have systematically dissected the potential contribution to catalysis from electrostatic complementarity in ketosteroid isomerase. Phenolates, analogs of the transition state and reaction intermediate, bind and accept two hydrogen bonds in an active site oxyanion hole. The binding of substituted phenolates of constant molecular shape but increasing pKa models the charge accumulation in the oxyanion hole during the enzymatic reaction. As charge localization increases, the NMR chemical shifts of protons involved in oxyanion hole hydrogen bonds increase by 0.50–0.76 ppm per pKa unit, suggesting a bond shortening of ~0.02 Å per pKa unit. Nevertheless, there is little change in binding affinity across a series of substituted phenolates (ΔΔG = −0.2 kcal/mol per pKa unit). The small effect of increased charge localization on affinity occurs despite the shortening of the hydrogen bonds and a large favorable change in binding enthalpy (ΔΔH = −2.0 kcal/mol per pKa unit). This shallow dependence of binding affinity suggests that electrostatic complementarity in the oxyanion hole makes at most a modest contribution to catalysis of ~300-fold. We propose that geometrical complementarity between the oxyanion hole hydrogen-bond donors and the transition state oxyanion provides a significant catalytic contribution, and suggest that KSI, like other enzymes, achieves its catalytic prowess through a combination of modest contributions from several mechanisms rather than from a single dominant contribution. PMID: 16602823
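The ~300-fold figure relates a free-energy increment to a rate (or affinity) factor through the standard Boltzmann relation ΔG = RT ln(factor). A quick numerical check, using the gas constant in kcal units and room temperature (values standard, not taken from the abstract):

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # temperature, K

# Free energy corresponding to a ~300-fold catalytic contribution:
dG = R * T * math.log(300)          # natural log, as in dG = RT ln(k_ratio)
print(round(dG, 2), "kcal/mol")

# Conversely, the observed -0.2 kcal/mol per pKa unit corresponds to a
# modest affinity factor per unit:
factor = math.exp(0.2 / (R * T))
print(round(factor, 2))
```

So a 300-fold contribution amounts to roughly 3.4 kcal/mol, while the measured 0.2 kcal/mol per pKa unit is only about a 1.4-fold change in affinity per unit, consistent with the "modest contribution" conclusion.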
Complementarity among four highly productive grassland species depends on resource availability.
Roscher, Christiane; Schmid, Bernhard; Kolle, Olaf; Schulze, Ernst-Detlef
2016-06-01
Positive species richness-productivity relationships are common in biodiversity experiments, but how resource availability modifies biodiversity effects in grass-legume mixtures composed of highly productive species is yet to be explicitly tested. We addressed this question by choosing two grasses (Arrhenatherum elatius and Dactylis glomerata) and two legumes (Medicago × varia and Onobrychis viciifolia) which are highly productive in monocultures and dominant in mixtures (the Jena Experiment). We established monocultures, all possible two- and three-species mixtures, and the four-species mixture under three different resource supply conditions (control, fertilization, and shading). Compared to the control, community biomass production decreased under shading (-56 %) and increased under fertilization (+12 %). Net diversity effects (i.e., mixture minus mean monoculture biomass) were positive in the control and under shading (on average +15 and +72 %, respectively) and negative under fertilization (-10 %). Positive complementarity effects in the control suggested resource partitioning and facilitation of growth through symbiotic N2 fixation by legumes. Positive complementarity effects under shading indicated that resource partitioning is also possible when growth is carbon-limited. Negative complementarity effects under fertilization suggested that external nutrient supply depressed facilitative grass-legume interactions due to increased competition for light. Selection effects, which quantify the dominance of species with particularly high monoculture biomasses in the mixture, were generally small compared to complementarity effects, and indicated that these species had comparable competitive strengths in the mixture. Our study shows that resource availability has a strong impact on the occurrence of positive diversity effects among tall and highly productive grass and legume species.
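The net, complementarity, and selection effects used here typically follow the additive partitioning of Loreau and Hector (2001): net effect = N·mean(ΔRY)·mean(M) + N·cov(ΔRY, M), where M are monoculture yields and ΔRY the deviations of observed from expected relative yields. A sketch with made-up biomass values (the scheme is standard, the numbers are illustrative):

```python
import numpy as np

def additive_partition(mono, mix_obs):
    """Loreau-Hector additive partitioning of the net diversity effect
    into complementarity and selection, assuming equal expected relative
    yields of 1/N (replacement design)."""
    mono = np.asarray(mono, float)        # monoculture yields M_i
    mix_obs = np.asarray(mix_obs, float)  # per-species yields in mixture
    n = len(mono)
    d_ry = mix_obs / mono - 1.0 / n       # deviation from expected rel. yield
    comp = n * d_ry.mean() * mono.mean()              # complementarity effect
    sel = n * np.mean((d_ry - d_ry.mean()) * (mono - mono.mean()))  # selection
    net = comp + sel                      # equals observed - expected yield
    return net, comp, sel

# Illustrative yields (g/m^2) for a four-species mixture:
mono = [600.0, 500.0, 800.0, 400.0]
mix = [200.0, 150.0, 250.0, 120.0]
net, comp, sel = additive_partition(mono, mix)
print(round(net, 1), round(comp, 1), round(sel, 1))
```

A positive complementarity term with a near-zero selection term is the signature reported in the control and shading treatments above; fertilization flipping the complementarity term negative is what the abstract attributes to increased competition for light.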