Solving of variational inequalities by reducing to the linear complementarity problem
NASA Astrophysics Data System (ADS)
Gabidullina, Z. R.
2016-11-01
We study variational inequalities closely connected with the linear separation problem for convex polyhedra in Euclidean space. To solve these inequalities, we apply a reduction to the linear complementarity problem. This reduction allows one to solve the variational inequalities with the help of the Matlab software package.
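Once a problem is reduced to the linear complementarity problem LCP(M, q), that is, find z >= 0 with w = Mz + q >= 0 and z'w = 0, many standard iterative schemes apply. As a hedged illustration (not the authors' Matlab code), here is a minimal projected Gauss-Seidel sketch, assuming M has a positive diagonal (e.g. is symmetric positive definite):

```python
# Projected Gauss-Seidel for LCP(M, q): find z >= 0 with
# w = M z + q >= 0 and z.w = 0.  Illustrative sketch only;
# convergence is assumed via a positive-definite M.
def lcp_pgs(M, q, iters=200):
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # row-i residual excluding the diagonal term
            r = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            z[i] = max(0.0, -r / M[i][i])
    return z

# Toy instance: the interior solution of M z = -q is z = (1/3, 1/3)
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, -1.0]
z = lcp_pgs(M, q)
```

For this instance the complementarity conditions hold with w = 0, so the iteration settles on z = (1/3, 1/3).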
A Self-Adaptive Projection and Contraction Method for Linear Complementarity Problems
Liao, Lizhi; Wang, Shengli
2003-10-15
In this paper we develop a self-adaptive projection and contraction method for the linear complementarity problem (LCP). This method improves the practical performance of the modified projection and contraction method by adopting a self-adaptive technique. The global convergence of our new method is proved under mild assumptions. Our numerical tests clearly demonstrate the necessity and effectiveness of our proposed method.
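The basic projection iteration underlying such methods maps x to max(0, x - beta*(Mx + q)) and contracts for a suitable step size beta. The sketch below adds only a crude step-shrinking rule as a stand-in for the paper's self-adaptive technique; it is an assumption-laden illustration, not the authors' method:

```python
# Basic projection iteration for LCP(M, q), with a crude adaptive rule
# that shrinks beta when the residual stops decreasing.  Sketch only.
def lcp_projection(M, q, beta=0.5, iters=500):
    n = len(q)
    x = [0.0] * n

    def step(x):
        Fx = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
        y = [max(0.0, x[i] - beta * Fx[i]) for i in range(n)]
        return y, max(abs(x[i] - y[i]) for i in range(n))

    prev = float('inf')
    for _ in range(iters):
        y, r = step(x)
        if r >= prev:        # no progress: damp the step size
            beta *= 0.7
        prev = r
        x = y
    return x

x = lcp_projection([[2.0, 1.0], [1.0, 2.0]], [-1.0, -1.0])
```

On this toy instance the iteration converges to the same solution (1/3, 1/3) as any LCP solver would.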
The Solution of Linear Complementarity Problems on an Array Processor.
1981-01-01
because it is a test problem which has been solved by many authors). The measured time per iteration on the pilot DAP was 2.2 ms, as compared to the ... subroutine is given in Figure 4.3, and a full listing of the program is given in Appendix D. To save time, the test for convergence is executed only every ... additional testing, it is assumed that a copy of the top plane is stored above the top plane. The two implementations were run on the problem with H - 10, h
1983-08-01
earlier paper, our present analysis applies only to the symmetric linear complementarity problem. Various applications to a strictly convex quadratic ... characterization requires no such constraint qualification as (F). There is yet another approach to apply an iterative method for solving the strictly convex ... described a Lagrangian relaxation algorithm for a constrained matrix problem which is formulated as a strictly convex separable quadratic program. They
Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.
2001-07-01
This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper.
1980-10-01
[Fragment of a FORTRAN source listing (subroutines SOLRED and INTADD; dam problem and problem (5.3), (5.4)); no abstract text is recoverable.]
1987-05-01
Laboratory. The Equivalence of Dantzig's Self-Dual Parametric Algorithm for Linear Programs to Lemke's Algorithm for Linear Complementarity Problems Applied to ... Management Science 11, pp. 681-689. Lemke, C.E. (1970). "Recent results on complementarity problems," in Nonlinear programming (J.B. Rosen, O.L
New Existence Conditions for Order Complementarity Problems
NASA Astrophysics Data System (ADS)
Németh, S. Z.
2009-09-01
Complementarity problems are mathematical models of problems in economics, engineering and physics. A special class of complementarity problems are the order complementarity problems [2]. Order complementarity problems can be applied in lubrication theory [6] and economics [1]. The notion of exceptional family of elements for general order complementarity problems in Banach spaces will be introduced. It will be shown that for general order complementarity problems defined by completely continuous fields the problem has either a solution or an exceptional family of elements (for other notions of exceptional family of elements see [1, 2, 3, 4] and the related references therein). This solves a conjecture of [2] about the existence of exceptional family of elements for order complementarity problems. The proof can be done by using the Leray-Schauder alternative [5]. An application to integral operators will be given.
Generalized quasi-variational inequality and implicit complementarity problems
Yao, Jen-Chih.
1989-10-01
A new problem called the generalized quasi-variational inequality problem is introduced. This new formulation extends all kinds of variational inequality problem formulations that have been introduced and enlarges the class of problems that can be approached by the variational inequality problem formulation. Existence results without convexity assumptions are established and topological properties of the solution set are investigated. A new problem called the generalized implicit complementarity problem is also introduced which generalizes all the complementarity problem formulations that have been introduced. Applications of generalized quasi-variational inequality and implicit complementarity problems are given. 43 refs.
A basic theorem of complementarity for the generalized variational-like inequality problem
Yao, Jen-Chih.
1989-11-01
In this report, a basic theorem of complementarity is established for the generalized variational-like inequality problem introduced by Parida and Sen. Some existence results for both generalized variational inequality and complementarity problems are established by employing this basic theorem of complementarity. In particular, some sets of conditions that are normally satisfied by a nonsolvable generalized complementarity problem are investigated. 16 refs.
Levenberg-Marquardt method for the eigenvalue complementarity problem.
Chen, Yuan-yuan; Gao, Yan
2014-01-01
The eigenvalue complementarity problem (EiCP) is a very useful model, widely used in the study of many problems in mechanics, engineering, and economics. The EiCP has been shown to be equivalent to a special nonlinear complementarity problem or to a mathematical programming problem with complementarity constraints. The existing methods for solving the EiCP are all nonsmooth methods, including nonsmooth and semismooth Newton-type methods. In this paper, we reformulate the EiCP as a system of continuously differentiable equations and apply the Levenberg-Marquardt method to solve it. Under mild assumptions, the method is proved to be globally convergent. Finally, some numerical results and extensions of the method are given. The numerical experiments highlight the efficiency of the method.
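The Levenberg-Marquardt idea itself is simple: for a smooth square system F(x) = 0, each step solves the damped normal equations (J'J + mu*I)d = -J'F. The sketch below shows the bare iteration on a hypothetical toy system (not the paper's EiCP reformulation), with a fixed damping parameter and no adaptive update of mu:

```python
# Bare Levenberg-Marquardt sketch for a smooth 2x2 system F(x) = 0.
# Fixed damping mu; a toy system stands in for the EiCP reformulation.
def solve2(A, b):
    # 2x2 linear solve by Cramer's rule
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - b[1]*A[0][1]) / det,
            (A[0][0]*b[1] - A[1][0]*b[0]) / det]

def lm(F, J, x, mu=1e-3, iters=50):
    for _ in range(iters):
        r, Jx = F(x), J(x)
        # damped normal equations: (J^T J + mu I) d = -J^T r
        JtJ = [[sum(Jx[k][i]*Jx[k][j] for k in range(2)) + (mu if i == j else 0.0)
                for j in range(2)] for i in range(2)]
        Jtr = [sum(Jx[k][i]*r[k] for k in range(2)) for i in range(2)]
        d = solve2(JtJ, [-Jtr[0], -Jtr[1]])
        x = [x[0] + d[0], x[1] + d[1]]
    return x

# Toy system: unit circle intersected with the line x = y
F = lambda x: [x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]]
J = lambda x: [[2*x[0], 2*x[1]], [1.0, -1.0]]
root = lm(F, J, [1.0, 0.0])
```

The iteration converges to (1/sqrt(2), 1/sqrt(2)); a practical implementation would adapt mu based on the gain ratio of each step.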
A second order cone complementarity approach for the numerical solution of elastoplasticity problems
NASA Astrophysics Data System (ADS)
Zhang, L. L.; Li, J. Y.; Zhang, H. W.; Pan, S. H.
2013-01-01
In this paper we present a new approach for solving elastoplastic problems as second order cone complementarity problems (SOCCPs). Specifically, two classes of elastoplastic problems, i.e. the J2 plasticity problems with combined linear kinematic and isotropic hardening laws and the Drucker-Prager plasticity problems with associative or non-associative flow rules, are taken as examples to illustrate the main idea of our new approach. In the new approach, firstly, the classical elastoplastic constitutive equations are equivalently reformulated as second order cone complementarity conditions. Secondly, by employing the finite element method and treating the nodal displacements and the plasticity multiplier vectors of the Gaussian integration points as the unknown variables, we obtain a standard SOCCP formulation for the elastoplasticity analysis, which makes general SOCCP solvers developed in the field of mathematical programming directly available in the field of computational plasticity. Finally, a semi-smooth Newton algorithm is suggested to solve the obtained SOCCPs. Numerical results of several classical plasticity benchmark problems confirm the effectiveness and robustness of the SOCCP approach.
Pseudo-Monotone Complementarity Problems in Hilbert Space
1990-07-01
have the same solution set. Therefore, one approach to studying NCP is by studying VIP over closed convex cones. The purpose of this paper is to use ... interior and relative boundary of B in K, respectively. The set K\B denotes the complement of B in K. A subset of a Hilbert space is said to be 2 ... conditions for the existence of solutions to the variational inequality problem for unbounded sets. Theorem 2.2. Let K be a closed convex subset of the
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
Huang, Kuo-Ling; Mehrotra, Sanjay
2016-11-08
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
NASA Astrophysics Data System (ADS)
Kastner, R. E.
It is argued that Niels Bohr ultimately arrived at positivistic and antirealist-flavored statements because of weaknesses in his initial objective of accounting for measurement in physical terms. Bohr's investigative approach faced a dilemma, the choices being (i) conceptual inconsistency or (ii) taking the classical realm as primitive. In either case, Bohr's "Complementarity" does not adequately explain or account for the emergence of a macroscopic, classical domain from a microscopic domain described by quantum mechanics. A diagnosis of the basic problem is offered, and an alternative way forward is indicated.
A superlinear infeasible-interior-point algorithm for monotone complementarity problems
Wright, S.; Ralph, D.
1996-11-01
We use the globally convergent framework proposed by Kojima, Noma, and Yoshise to construct an infeasible-interior-point algorithm for monotone nonlinear complementarity problems. Superlinear convergence is attained when the solution is nondegenerate and also when the problem is linear. Numerical experiments confirm the efficacy of the proposed approach.
Linearization problem in pseudolite surveys
NASA Astrophysics Data System (ADS)
Cellmer, Slawomir; Rapinski, Jacek
2010-06-01
GPS augmented with pseudolites (PL) can be used in various engineering surveys. A pseudolite-only navigation system can also be designed and used in any place, even where a GPS signal is not available (Kee et al. Development of indoor navigation system using asynchronous pseudolites, 1038-1045, 2000). Especially in engineering surveys, where a harsh survey environment is common, pseudolites have many applications; they may be used on construction sites, in open pit mines, and in city canyons. GPS and PL baseline processing are similar, although there are a few differences that must be taken into account. One of the major issues is the linearization problem. The source of the problem is the neglect of the second terms of the Taylor series expansion in GPS baseline processing software. This problem occurs when the pseudolite is relatively close to the receiver, which is the case in PL surveys. In this paper the authors present an algorithm for GPS + PL data processing that includes the second terms of the Taylor series expansion, which are neglected in the classical GPS-only approach. The mathematical model of the adjustment problem, a detailed proposal for its application in baseline processing algorithms, and numerical tests are presented.
Can linear superiorization be useful for linear optimization problems?
NASA Astrophysics Data System (ADS)
Censor, Yair
2017-04-01
Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
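The LinSup recipe, a feasibility-seeking sweep of half-space projections with shrinking perturbations against the target vector interleaved between sweeps, can be sketched in a few lines. The toy problem and step schedule below are assumptions for illustration, not the paper's experimental setup:

```python
# Linear superiorization sketch: cyclic half-space projections
# (feasibility seeking) perturbed between sweeps by a shrinking
# step against the target vector c.  Toy instance only.
def project_halfspace(x, a, b):
    # project x onto {y : a.y <= b} if it is outside
    v = sum(ai * xi for ai, xi in zip(a, x)) - b
    if v > 0:
        s = v / sum(ai * ai for ai in a)
        x = [xi - s * ai for xi, ai in zip(x, a)]
    return x

def linsup(halfspaces, c, x, sweeps=100):
    step = 0.5
    for _ in range(sweeps):
        # superiorization: perturb toward smaller c.x
        x = [xi - step * ci for xi, ci in zip(x, c)]
        step *= 0.9          # perturbations must shrink (be summable)
        for a, b in halfspaces:   # feasibility-seeking sweep
            x = project_halfspace(x, a, b)
    return x

# feasible set: x1 >= 0, x2 >= 0, x1 + x2 >= 1; target c = (1, 1)
H = [([-1.0, 0.0], 0.0), ([0.0, -1.0], 0.0), ([-1.0, -1.0], -1.0)]
x = linsup(H, [1.0, 1.0], [2.0, 2.0])
```

The run ends at a feasible point on the active constraint x1 + x2 = 1 with a target value near the LP minimum, which is the behavior LinSup aims for without running a full optimizer.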
Stochastic Linear Quadratic Optimal Control Problems
Chen, S.; Yong, J.
2001-07-01
This paper is concerned with the stochastic linear quadratic optimal control problem (LQ problem, for short) for which the coefficients are allowed to be random and the cost functional is allowed to have a negative weight on the square of the control variable. Some intrinsic relations among the LQ problem, the stochastic maximum principle, and the (linear) forward-backward stochastic differential equations are established. Some results involving Riccati equation are discussed as well.
Higher order sensitivity of solutions to convex programming problems without strict complementarity
NASA Technical Reports Server (NTRS)
Malanowski, Kazimierz
1988-01-01
Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.
Quantum Algorithm for Linear Programming Problems
NASA Astrophysics Data System (ADS)
Joag, Pramod; Mehendale, Dhananjay
The quantum algorithm (PRL 103, 150502, 2009) solves a system of linear equations with exponential speedup over existing classical algorithms. We show that the above algorithm can be readily adopted in iterative algorithms for solving linear programming (LP) problems. The first iterative algorithm that we suggest for the LP problem follows from duality theory. It consists of finding a nonnegative solution of the equations for the duality condition, for the constraints imposed by the given primal problem, and for the constraints imposed by its corresponding dual problem. This problem is called the problem of nonnegative least squares, or simply the NNLS problem. We use a well-known method for solving the NNLS problem due to Lawson and Hanson. This algorithm essentially consists of solving, in each iterative step, a new system of linear equations. The other iterative algorithms that can be used are those based on interior point methods. The same technique can be adopted for solving network flow problems, as these problems can be readily formulated as LP problems. The suggested quantum algorithm can solve LP problems and network flow problems of very large size, involving millions of variables.
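The NNLS subproblem referenced above is min ||Ax - b||^2 subject to x >= 0. Lawson-Hanson is the classical active-set algorithm for it; the sketch below uses a simpler projected-gradient iteration instead, purely to make the problem concrete:

```python
# Projected gradient for NNLS: min ||A x - b||^2 s.t. x >= 0.
# A simple stand-in for Lawson-Hanson, for illustration only;
# lr must be below 2 / lambda_max(A^T A) for convergence.
def nnls_pg(A, b, iters=2000, lr=0.1):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x

# Example: unconstrained least squares would need a negative
# coefficient on the second column; NNLS clamps it to zero.
A = [[1.0, 1.0], [0.0, 1.0]]
b = [1.0, -1.0]
x = nnls_pg(A, b)
```

Here the nonnegativity constraint is active: the solution is x = (1, 0) rather than the unconstrained (2, -1).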
The linear separability problem: some testing methods.
Elizondo, D
2006-03-01
The notion of linear separability is used widely in machine learning research. Learning algorithms that use this concept to learn include neural networks (single layer perceptron and recursive deterministic perceptron), and kernel machines (support vector machines). This paper presents an overview of several of the methods for testing linear separability between two classes. The methods are divided into four groups: Those based on linear programming, those based on computational geometry, one based on neural networks, and one based on quadratic programming. The Fisher linear discriminant method is also presented. A section on the quantification of the complexity of classification problems is included.
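One of the neural-network tests mentioned above can be made concrete: the single-layer perceptron converges on linearly separable data and cycles forever otherwise, so an epoch budget turns it into a (one-sided) separability test. A minimal sketch on hypothetical 2-D data:

```python
# Perceptron-based separability check: converges iff the two classes
# are linearly separable; an epoch budget bounds the non-separable case.
def perceptron_separable(points, labels, epochs=1000):
    # augmented weights (w1, w2, bias); labels in {-1, +1}
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in zip(points, labels):
            if y * (w[0]*x1 + w[1]*x2 + w[2]) <= 0:
                w = [w[0] + y*x1, w[1] + y*x2, w[2] + y]
                errors += 1
        if errors == 0:
            return True, w   # a separating hyperplane was found
    return False, w          # no separator within the epoch budget

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
sep, _ = perceptron_separable(pts, [-1, -1, -1, 1])   # AND: separable
xor, _ = perceptron_separable(pts, [-1, 1, 1, -1])    # XOR: not separable
```

Note the asymmetry: a "True" answer is a certificate (the returned weights separate the classes), while "False" only means no separator was found within the budget; the LP and computational-geometry methods in the paper give exact answers.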
The report discusses a two-person max-min problem in which the maximizing player moves first and the minimizing player has perfect information of the ... The joint constraints as well as the objective function are assumed to be linear. For this problem it is shown that the familiar inequality min max >= max min is reversed due to the influence of the joint constraints. The problem is characterized as a nonconvex program and a method of
The Vertical Linear Fractional Initialization Problem
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
1999-01-01
This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.
Drinkers and Bettors: Investigating the Complementarity of Alcohol Consumption and Problem Gambling
Maclean, Johanna Catherine; Ettner, Susan L.
2009-01-01
Regulated gambling is a multi-billion dollar industry in the United States with greater than 100 percent increases in revenue over the past decade. Along with this rise in gambling popularity and gaming options comes an increased risk of addiction and the associated social costs. This paper focuses on the effect of alcohol use on gambling-related problems. Variables correlated with both alcohol use and gambling may be difficult to observe, and the inability to include these items in empirical models may bias coefficient estimates. After addressing the endogeneity of alcohol use when appropriate, we find strong evidence that problematic gambling and alcohol consumption are complementary activities. PMID:18430523
Linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1980-01-01
Problem involves design of controls for linear time-invariant system disturbed by white noise. Solution is Kalman filter coupled through set of optimal regulator gains to produce desired control signal. Key to solution is solving matrix Riccati differential equation. LSOCE effectively solves problem for wide range of practical applications. Program is written in FORTRAN IV for batch execution and has been implemented on IBM 360.
The Convex Geometry of Linear Inverse Problems
2010-12-02
The Convex Geometry of Linear Inverse Problems. Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, and Alan S. Willsky. Laboratory for ... = 3r(m1 + m2 - r) + 2(m1 - r - r^2), where the second inequality follows from the fact that (a + b)^2 <= 2a^2 + 2b^2. References: [1] Aja-Fernandez, S
Numerical stability in problems of linear algebra.
NASA Technical Reports Server (NTRS)
Babuska, I.
1972-01-01
Mathematical problems are introduced as mappings from the space of input data to that of the desired output information. Then a numerical process is defined as a prescribed recurrence of elementary operations creating the mapping of the underlying mathematical problem. The ratio of the error committed by executing the operations of the numerical process (the roundoff errors) to the error introduced by perturbations of the input data (initial error) gives rise to the concept of lambda-stability. As examples, several processes are analyzed from this point of view, including, especially, old and new processes for solving systems of linear algebraic equations with tridiagonal matrices. In particular, it is shown how such a priori information can be utilized as, for instance, a knowledge of the row sums of the matrix. Information of this type is frequently available where the system arises in connection with the numerical solution of differential equations.
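The tridiagonal systems discussed above are typically solved by tridiagonal Gaussian elimination (the Thomas algorithm), whose stability for diagonally dominant matrices is exactly the kind of property the lambda-stability analysis addresses. A minimal sketch, assuming diagonal dominance so that no pivoting is needed:

```python
# Thomas algorithm: solve T x = d for tridiagonal T with sub-diagonal a,
# diagonal b, super-diagonal c.  Assumes diagonal dominance (no pivoting).
def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# tridiag(-1, 2, -1), the classic -u'' discretization, with unit RHS
x = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0],
           [1.0, 1.0, 1.0])
```

For this 3x3 system the exact solution is (1.5, 2, 1.5), and the algorithm costs O(n) operations versus O(n^3) for dense elimination.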
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
A multistage linear array assignment problem
NASA Technical Reports Server (NTRS)
Nicol, David M.; Shier, D. R.; Kincaid, R. K.; Richards, D. S.
1988-01-01
The implementation of certain algorithms on parallel processing computing architectures can involve partitioning contiguous elements into a fixed number of groups, each of which is to be handled by a single processor. It is desired to find an assignment of elements to processors that minimizes the sum of the maximum workloads experienced at each stage. This problem can be viewed as a multi-objective network optimization problem. Polynomially-bounded algorithms are developed for the case of two stages, whereas the associated decision problem (for an arbitrary number of stages) is shown to be NP-complete. Heuristic procedures are therefore proposed and analyzed for the general problem. Computational experience with one of the exact procedures, incorporating certain pruning rules, is presented. Empirical results also demonstrate that one of the heuristic procedures is especially effective in practice.
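The single-stage core of this partitioning problem, split a contiguous workload among k processors so the maximum group sum is minimized, is polynomially solvable, e.g. by binary search on the answer. This sketch shows that standard technique on hypothetical data; it is not the paper's two-stage algorithm:

```python
# Contiguous partition of work among k processors minimizing the
# maximum group sum, via binary search on the feasible bottleneck.
def min_max_partition(work, k):
    def groups_needed(cap):
        # greedy: open a new group whenever cap would be exceeded
        n, cur = 1, 0
        for w in work:
            if cur + w > cap:
                n, cur = n + 1, 0
            cur += w
        return n

    lo, hi = max(work), sum(work)
    while lo < hi:
        mid = (lo + hi) // 2
        if groups_needed(mid) <= k:
            hi = mid          # cap is achievable, try smaller
        else:
            lo = mid + 1      # cap too tight
    return lo

best = min_max_partition([7, 2, 5, 10, 8], 2)
```

Here the optimal split is [7, 2, 5] | [10, 8] with bottleneck 18; the multistage objective studied in the paper couples several such stages, which is what drives the NP-completeness.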
Linear inverse problem of the reactor dynamics
NASA Astrophysics Data System (ADS)
Volkov, N. P.
2017-01-01
The aim of this work is to study transient processes in nuclear reactors. A mathematical model of the reactor dynamics excluding reverse thermal coupling is investigated. This model is described by a system of integro-differential equations, consisting of a non-stationary anisotropic multispeed kinetic transport equation and a delayed neutron balance equation. An inverse problem is formulated to determine the stationary part of the source function along with the solution of the direct problem. The author obtains sufficient conditions for the existence and uniqueness of a generalized solution of this inverse problem.
Complementarity, Sets and Numbers
ERIC Educational Resources Information Center
Otte, M.
2003-01-01
Niels Bohr's term "complementarity" has been used by several authors to capture the essential aspects of the cognitive and epistemological development of scientific and mathematical concepts. In this paper we will conceive of complementarity in terms of the dual notions of extension and intension of mathematical terms. A complementarist approach…
Symmetry Groups for Linear Programming Relaxations of Orthogonal Array Problems
2015-03-26
THESIS, March 2015. David M. Arquette, Second Lieutenant, USAF. AFIT-ENC-MS-15-M-003. This work of the U.S. Government is not subject to copyright protection in the United States. Approved for public release; distribution unlimited.
A piecewise linear approximation scheme for hereditary optimal control problems
NASA Technical Reports Server (NTRS)
Cliff, E. M.; Burns, J. A.
1977-01-01
An approximation scheme based on 'piecewise linear' approximations of L2 spaces is employed to formulate a numerical method for solving quadratic optimal control problems governed by linear retarded functional differential equations. This piecewise linear method is an extension of the so-called averaging technique. It is shown that the Riccati equation for the linear approximation is solved by a simple transformation of the averaging solution. Thus, the computational requirements are essentially the same. Numerical results are given.
Singular linear-quadratic control problem for systems with linear delay
Sesekin, A. N.
2013-12-18
A singular linear-quadratic optimization problem on the trajectories of non-autonomous linear differential equations with linear delay is considered. The peculiarity of this problem is that it has no solution in the class of integrable controls; to ensure the existence of solutions, the class of controls must be expanded to include controls with impulse components. Dynamical systems with linear delay are used to describe the motion of a pantograph on the current collector in electric traction, in biology, etc. It should be noted that singularity of the quality criterion occurs quite commonly in practical problems, and therefore the study of such problems is surely important. For the problem under discussion, an optimal program control containing impulse components at the initial and final moments of time is constructed under certain assumptions on the functional and the right-hand side of the control system.
NASA Astrophysics Data System (ADS)
Howard, Don
2013-04-01
Complementarity is Niels Bohr's most original contribution to the interpretation of quantum mechanics, but there is widespread confusion about complementarity in the popular literature and even in some of the serious scholarly literature on Bohr. This talk provides a historically grounded guide to Bohr's own understanding of the doctrine, emphasizing the manner in which complementarity is deeply rooted in the physics of the quantum world, in particular the physics of entanglement, and is, therefore, not just an idiosyncratic philosophical addition. Among the more specific points to be made are that complementarity is not to be confused with wave-particle duality, that it is importantly different from Heisenberg's idea of observer-induced limitations on measurability, and that it is in no way an expression of a positivist philosophical project.
Multisplitting for linear, least squares and nonlinear problems
Renaut, R.
1996-12-31
In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem and of nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work with Andreas Frommer, University of Wuppertal, on the linear problems and with Hans Mittelmann, Arizona State University, on the nonlinear problems.
Experiences with linear solvers for oil reservoir simulation problems
Joubert, W.; Janardhan, R.; Biswas, D.; Carey, G.
1996-12-31
This talk will focus on practical experiences with iterative linear solver algorithms used in conjunction with Amoco Production Company's Falcon oil reservoir simulation code. The goal of this study is to determine the best linear solver algorithms for these types of problems. The results of numerical experiments will be presented.
Linear Programming and Its Application to Pattern Recognition Problems
NASA Technical Reports Server (NTRS)
Omalley, M. J.
1973-01-01
Linear programming and linear programming like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
On the Displacement Problem of Plane Linear Elastostatics
NASA Astrophysics Data System (ADS)
Russo, R.
2010-09-01
We consider the displacement problem of linear elastostatics in a Lipschitz exterior domain of R^2. We prove that if the boundary datum a lies in L^2(∂Ω), then the problem has a unique very weak solution which converges to an assigned constant vector u∞ at infinity if and only if a and u∞ satisfy a suitable compatibility condition.
Solving linear integer programming problems by a novel neural model.
Cavalieri, S
1999-02-01
The paper deals with integer linear programming problems. As is well known, these are extremely complex problems, even when the number of integer variables is quite low. The literature provides examples of various methods to solve such problems, some of which are of a heuristic nature. This paper proposes an alternative strategy based on the Hopfield neural network. The advantage of the strategy essentially lies in the fact that hardware implementation of the neural model allows the time required to obtain a solution to be independent of the size of the problem to be solved. The paper presents a particular class of integer linear programming problems, including well-known problems such as the Travelling Salesman Problem and the Set Covering Problem. After a brief description of this class of problems, it is demonstrated that the original Hopfield model is incapable of supplying valid solutions. This is attributed to the presence of constant bias currents in the dynamics of the neural model. A demonstration of this is given, and then a novel neural model is presented which continues to be based on the same architecture as the Hopfield model, but introduces modifications thanks to which the integer linear programming problems presented can be solved. Some numerical examples and concluding remarks highlight the solving capacity of the novel neural model.
Efficient numerical methods for entropy-linear programming problems
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Gasnikova, E. B.; Nesterov, Yu. E.; Chernov, A. V.
2016-04-01
Entropy-linear programming (ELP) problems arise in various applications. They are usually written as the maximization of entropy (minimization of minus entropy) under affine constraints. In this work, new numerical methods for solving ELP problems are proposed. Sharp estimates for the convergence rates of the proposed methods are established. The approach described applies to a broader class of minimization problems for strongly convex functionals with affine constraints.
Liang, X B; Si, J
2001-01-01
This paper investigates the existence, uniqueness, and global exponential stability (GES) of the equilibrium point for a large class of neural networks with globally Lipschitz continuous activations, including the widely used sigmoidal activations and the piecewise linear activations. The sufficient condition provided for GES is mild, and some conditions easily examined in practice are also presented. The GES of neural networks in the case of locally Lipschitz continuous activations is also obtained under an appropriate condition. The analysis results given in the paper extend substantially the existing relevant stability results in the literature, and therefore expand significantly the application range of neural networks in solving optimization problems. As a demonstration, we apply the obtained analysis results to the design of a recurrent neural network (RNN) for solving the linear variational inequality problem (VIP) defined on any nonempty and closed box set, which includes box-constrained quadratic programming and the linear complementarity problem as special cases. It can be inferred that the linear VIP has a unique solution for the class of Lyapunov diagonally stable matrices, and that the synthesized RNN is globally exponentially convergent to the unique solution. Some illustrative simulation examples are also given.
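The box-constrained linear VIP described above admits a simple discrete-time projection sketch. The snippet below is our own minimal illustration, not the authors' RNN design: it assumes a symmetric positive definite M so the fixed point is easy to verify, and all function and parameter names are ours.

```python
import numpy as np

def solve_box_lvi(M, q, lo, hi, alpha=0.1, tol=1e-10, max_iter=20000):
    """Projection iteration for the linear VIP on a box:
    find x in [lo, hi] with (M x + q)^T (y - x) >= 0 for all y in the box.
    A fixed point of x = P_box(x - alpha*(M x + q)) solves the VIP."""
    x = np.clip(np.zeros_like(q), lo, hi)
    for _ in range(max_iter):
        x_new = np.clip(x - alpha * (M @ x + q), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: box-constrained QP  min 0.5 x^T M x + q^T x  over [0, 2]^2,
# a special case of the linear VIP (M symmetric positive definite here).
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-8.0, -6.0])
x = solve_box_lvi(M, q, lo=0.0, hi=2.0)
```

For the Lyapunov diagonally stable class considered in the paper, the corresponding continuous-time network converges exponentially; the toy iteration above only illustrates the fixed-point characterization x = P_box(x - alpha*(Mx + q)).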
Singular linear quadratic control problem for systems with linear and constant delay
NASA Astrophysics Data System (ADS)
Sesekin, A. N.; Andreeva, I. Yu.; Shlyakhov, A. S.
2016-12-01
This article is devoted to the singular linear-quadratic optimization problem on the trajectories of a linear non-autonomous system of differential equations with linear and constant delay. It should be noted that such a problem has no solution in the class of integrable controls, so to ensure the existence of a solution the class of controls must be expanded to include impulse components. For the problem under consideration, we have constructed a program control containing impulse components at the initial and final moments of time. This is done under certain assumptions on the functional and the right-hand side of the control system.
An Algorithm for Linearly Constrained Nonlinear Programming Problems.
1980-01-01
ALGORITHM FOR LINEARLY CONSTRAINED NONLINEAR PROGRAMMING PROBLEMS, Mokhtar S. Bazaraa and Jamie J. Goode. In this paper an algorithm for solving a linearly… distance programming, as in the works of Bazaraa and Goode [2], and Wolfe [16], can be used for solving this problem. Special methods that take advantage of… Pacific Journal of Mathematics, Volume 16, pp. 1-3, 1966. 2. M. S. Bazaraa and J. J. Goode, "An Algorithm for Finding the Shortest Element of a…
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation, in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
NASA Astrophysics Data System (ADS)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
2014-06-01
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation, in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Unique radiation problems associated with the SLAC Linear Collider
Jenkins, T.M.; Nelson, W.R.
1987-01-01
The SLAC Linear Collider (SLC) is a variation of a new class of linear colliders whereby two linear accelerators are aimed at each other to collide intense bunches of electrons and positrons. Conventional storage rings become ever more costly as the energy of the stored beams increases, such that the cost per GeV of two linear colliders is less than that of electron-positron storage rings at center-of-mass energies above about 100 GeV. The SLC being built at SLAC is designed to achieve a center-of-mass energy of 100 GeV by accelerating intense bunches of particles, both electrons and positrons, in the SLAC linac and transporting them along two different arcs to a point where they are focused to a small radius and made to collide head on. The SLC has two main goals. The first is to develop the physics and technology of linear colliders. The other is to achieve center-of-mass energies above 90 GeV in order to investigate the unification of the weak and electromagnetic interactions in the energy range above 90 GeV (i.e., the Z^0, etc.). This note discusses a few of the special problems that were encountered by the Radiation Physics group at SLAC during the design and construction of the SLAC Linear Collider. The nature of these problems is discussed along with the methods employed to solve them.
Hierarchical Multiobjective Linear Programming Problems with Fuzzy Domination Structures
NASA Astrophysics Data System (ADS)
Yano, Hitoshi
2010-10-01
In this paper, we focus on hierarchical multiobjective linear programming problems with fuzzy domination structures, where multiple decision makers in a hierarchical organization have their own multiple objective linear functions together with common linear constraints. After introducing decision powers and the solution concept based on the α-level set for the fuzzy convex cone Λ which reflects a fuzzy domination structure, we propose a fuzzy approach to obtain a satisfactory solution which reflects not only the hierarchical relationships between the multiple decision makers but also their own preferences for their membership functions. In the proposed method, instead of the Pareto optimality concept, a generalized Λ̃α-extreme point concept is introduced. In order to obtain a satisfactory solution from among the generalized Λ̃α-extreme point set, an interactive algorithm based on linear programming is proposed, and the interactive processes are demonstrated by means of an illustrative numerical example.
On the linear properties of the nonlinear radiative transfer problem
NASA Astrophysics Data System (ADS)
Pikichyan, H. V.
2016-11-01
In this report, we further develop the assertions made for the nonlinear problem of reflection/transmission of radiation by a scattering/absorbing one-dimensional anisotropic medium of finite geometrical thickness, when both of its boundaries are illuminated by intense monochromatic radiative beams. The new conceptual element is the set of well-defined, so-called linear images, which admit a probabilistic interpretation. Within the nonlinear reflection/transmission problem we derive a solution similar to that of the linear case: the solution reduces to a linear combination of the linear images. By virtue of their physical meaning, these functions describe the reflectivity and transmittance of the medium for a single photon, or for a beam of unit intensity, incident on one of the boundaries of the layer, while the medium remains under bilateral illumination by external exciting radiation of arbitrary intensity. To determine the linear images, we exploit three well-known methods: (i) adding of layers, (ii) its limiting form, described by the differential equations of invariant imbedding, and (iii) a transition to the so-called functional equations of Ambartsumyan's complete invariance.
Towards an ideal preconditioner for linearized Navier-Stokes problems
Murphy, M.F.
1996-12-31
Discretizing certain linearizations of the steady-state Navier-Stokes equations gives rise to nonsymmetric linear systems with indefinite symmetric part. We show that for such systems there exists a block diagonal preconditioner which gives convergence in three GMRES steps, independent of the mesh size and viscosity parameter (Reynolds number). While this "ideal" preconditioner is too expensive to be used in practice, it provides a useful insight into the problem. We then consider various approximations to the ideal preconditioner, and describe the eigenvalues of the preconditioned systems. Finally, we compare these preconditioners numerically, and present our conclusions.
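The three-step GMRES property can be checked numerically in the symmetric (Stokes-like) saddle-point special case, where the block diagonal preconditioner built from the (1,1) block and the Schur complement leaves the preconditioned matrix with only three distinct eigenvalues. The sketch below is our own construction, not taken from the paper, and uses a random model problem rather than a discretized Navier-Stokes linearization:

```python
import numpy as np
from scipy.linalg import eigh

# Symmetric saddle-point model problem: A = [[F, B^T], [B, 0]] with F SPD.
rng = np.random.default_rng(0)
n, m = 8, 3
G = rng.standard_normal((n, n))
F = G @ G.T + n * np.eye(n)               # SPD (1,1) block
B = rng.standard_normal((m, n))           # full-rank constraint block

A = np.block([[F, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(F, B.T)           # Schur complement B F^{-1} B^T
P = np.block([[F, np.zeros((n, m))],
              [np.zeros((m, n)), S]])     # "ideal" block diagonal preconditioner

# Generalized eigenvalues of A v = lambda P v cluster at exactly three values,
# 1 and (1 +/- sqrt(5))/2, which is why GMRES terminates in three steps.
eigs = eigh(A, P, eigvals_only=True)
distinct = np.unique(np.round(eigs, 6))
```

Because every polynomial of degree three can annihilate three eigenvalues, a Krylov method applied to the preconditioned system reaches the exact solution in at most three iterations in exact arithmetic.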
An analytically solvable eigenvalue problem for the linear elasticity equations.
Day, David Minot; Romero, Louis Anthony
2004-07-01
Analytic solutions are useful for code verification. Structural vibration codes approximate solutions to the eigenvalue problem for the linear elasticity equations (Navier's equations). Unfortunately the verification method of 'manufactured solutions' does not apply to vibration problems. Verification books (for example [2]) tabulate a few of the lowest modes, but are not useful for computations of large numbers of modes. A closed form solution is presented here for all the eigenvalues and eigenfunctions for a cuboid solid with isotropic material properties. The boundary conditions correspond physically to a greased wall.
A modular hierarchy-based theory of the chemical origins of life based on molecular complementarity.
Root-Bernstein, Robert
2012-12-18
Molecular complementarity plays critical roles in the evolution of chemical systems and resolves a significant number of outstanding problems in the emergence of complex systems. All physical and mathematical models of organization within complex systems rely upon nonrandom linkage between components. Molecular complementarity provides a naturally occurring nonrandom linker. More importantly, the formation of hierarchically organized stable modules vastly improves the probability of achieving self-organization, and molecular complementarity provides a mechanism by which hierarchically organized stable modules can form. Finally, modularity based on molecular complementarity produces a means for storing and replicating information. Linear replicating molecules such as DNA or RNA are not required to transmit information from one generation of compounds to the next: compositional replication is as ubiquitous in living systems as genetic replication and is equally important to its functions. Chemical systems composed of complementary modules mediate this compositional replication and gave rise to linear replication schemes. In sum, I propose that molecular complementarity is ubiquitous in living systems because it provides the physicochemical basis for modular, hierarchical ordering and replication necessary for the evolution of the chemical systems upon which life is based. I conjecture that complementarity more generally is an essential agent that mediates evolution at every level of organization.
Positional and impulse strategies for linear problems of motion correction
NASA Astrophysics Data System (ADS)
Ananyev, B. I.; Gredasova, N. V.
2016-12-01
Control problems for a linear system with incomplete information are considered. It is supposed that a linear signal with an additive noise is observed. This noise, along with the disturbances in the state equation, is bounded by quadratic constraints. In the first case, the control action in the state equation is contained in a compact set; in the second case, the total variation of the control is restricted. The latter case leads to a sequence of impulse control actions (delta-functions). For both cases, we obtain definite relations for the optimal control actions that guarantee the minimax value of the terminal functional. We use methods of control theory under uncertainty and dynamic programming. Some examples from the theory of the motion of space and flight vehicles are investigated.
A recurrent neural network for solving bilevel linear programming problem.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian
2014-04-01
In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.
Extracting Embedded Generalized Networks from Linear Programming Problems.
1984-09-01
EXTRACTING EMBEDDED GENERALIZED NETWORKS FROM LINEAR PROGRAMMING PROBLEMS, by Gerald G. Brown, Richard D. McBride, and R. Kevin Wood. Naval Postgraduate School, Monterey, California 93943; University of Southern California, Los Angeles.
An Algorithm for Solving Interval Linear Programming Problems
1974-11-01
"regularized" à la Charnes-Cooper so that infeasibility is determined at the optimal solution if that is the case. If I(x*(v)) = 0 then x*(v) is an… Charnes and Cooper [3]) may be used to compute the new inverse. Theorem 2: The algorithm described above terminates in a finite number of steps… REFERENCES: 1) A. Ben-Israel and A. Charnes, "An Explicit Solution of a Special Class of Linear Programming Problems", Operations…
Private algebras in quantum information and infinite-dimensional complementarity
Crann, Jason; Kribs, David W.; Levene, Rupert H.; Todorov, Ivan G.
2016-01-15
We introduce a generalized framework for private quantum codes using von Neumann algebras and the structure of commutants. This leads naturally to a more general notion of complementary channel, which we use to establish a generalized complementarity theorem between private and correctable subalgebras that applies to both the finite and infinite-dimensional settings. Linear bosonic channels are considered and specific examples of Gaussian quantum channels are given to illustrate the new framework together with the complementarity theorem.
Rees algebras, Monomial Subrings and Linear Optimization Problems
NASA Astrophysics Data System (ADS)
Dupont, Luis A.
2010-06-01
In this thesis we are interested in studying algebraic properties of monomial algebras that can be linked to combinatorial structures, such as graphs and clutters, and to optimization problems. A goal here is to establish bridges between commutative algebra, combinatorics and optimization. We study the normality and the Gorenstein property, as well as the canonical module and the a-invariant, of Rees algebras and subrings arising from linear optimization problems. In particular, we study algebraic properties of edge ideals and algebras associated to uniform clutters with the max-flow min-cut property or the packing property. We also study algebraic properties of symbolic Rees algebras of edge ideals of graphs, edge ideals of clique clutters of comparability graphs, and Stanley-Reisner rings.
Linearization of the boundary-layer equations of the minimum time-to-climb problem
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1979-01-01
Ardema (1974) has formally linearized the two-point boundary value problem arising from a general optimal control problem, and has reviewed the known stability properties of such a linear system. In the present paper, Ardema's results are applied to the minimum time-to-climb problem. The linearized zeroth-order boundary layer equations of the problem are derived and solved.
First integrals for the Kepler problem with linear drag
NASA Astrophysics Data System (ADS)
Margheri, Alessandro; Ortega, Rafael; Rebelo, Carlota
2017-01-01
In this work we consider the Kepler problem with linear drag, and prove the existence of a continuous vector-valued first integral, obtained by taking the limit as t → +∞ of the Runge-Lenz vector. The norm of this first integral can be interpreted as an asymptotic eccentricity e∞ with 0 ≤ e∞ ≤ 1. The orbits satisfying e∞ < 1 approach the singularity by an elliptic spiral, and the corresponding solutions x(t) = r(t)e^{iθ(t)} have a norm r(t) that goes to zero like a negative exponential and an argument θ(t) that goes to infinity like a positive exponential. In particular, the difference between consecutive times of passage through the pericenter, say T_{n+1} − T_n, goes to zero as 1/n.
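The setting above is easy to reproduce numerically. The following sketch (our own check, not from the paper; initial data and drag coefficient are arbitrary choices) integrates the planar Kepler flow with linear drag and evaluates the Runge-Lenz vector, which is exactly conserved when the drag vanishes:

```python
import numpy as np
from scipy.integrate import solve_ivp

def kepler_drag(t, s, k):
    """Planar Kepler problem with linear drag: x'' = -x/|x|^3 - k x'."""
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -x / r3 - k * vx, -y / r3 - k * vy]

def runge_lenz(s):
    """Planar Runge-Lenz vector A = v x L - x/|x| (L = x vy - y vx is scalar)."""
    x, y, vx, vy = s
    L = x * vy - y * vx
    r = np.hypot(x, y)
    return np.array([vy * L - x / r, -vx * L - y / r])

s0 = [1.0, 0.0, 0.0, 1.2]        # bound, slightly eccentric initial orbit
# Without drag (k = 0) the Runge-Lenz vector is an exact first integral.
sol0 = solve_ivp(kepler_drag, (0, 20), s0, args=(0.0,), rtol=1e-10, atol=1e-12)
# With drag it is no longer conserved; the paper's first integral is its t -> oo limit.
sol1 = solve_ivp(kepler_drag, (0, 20), s0, args=(0.05,), rtol=1e-10, atol=1e-12)
A0, A1 = runge_lenz(np.array(s0)), runge_lenz(sol0.y[:, -1])
```

Tracking runge_lenz(sol1.y[:, i]) along the damped trajectory shows the drift toward the limiting vector whose norm is the asymptotic eccentricity e∞.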
Status Report: Black Hole Complementarity Controversy
NASA Astrophysics Data System (ADS)
Lee, Bum-Hoon; Yeom, Dong-han
2014-01-01
Black hole complementarity was a consensus among string theorists for the interpretation of the information loss problem. Recently, however, some authors have found inconsistencies in black hole complementarity: the large N rescaling and the Almheiri, Marolf, Polchinski and Sully (AMPS) argument. According to AMPS, the horizon should be a firewall, so that one cannot penetrate it, for consistency. There have been controversial discussions of the firewall. Apart from these papers, the authors advance an argument using a semi-regular black hole model and conclude that the firewall, if it exists, should affect the asymptotic observer. In addition, any argument that does not consider the duplication experiment and the large N rescaling is difficult to accept.
The Afshar Experiment and Complementarity
NASA Astrophysics Data System (ADS)
Kastner, Ruth
2006-03-01
A modified version of Young's experiment by Shahriar Afshar demonstrates that, prior to what appears to be a ``which-way'' measurement, an interference pattern exists. Afshar has claimed that this result constitutes a violation of the Principle of Complementarity. This paper discusses the implications of this experiment and considers how Cramer's Transactional Interpretation easily accommodates the result. It is also shown that the Afshar experiment is isomorphic in key respects to a spin one-half particle prepared as ``spin up along x'' and post-selected in a specific state of spin along z. The terminology ``which way'' or ``which-slit'' is critiqued; it is argued that this usage by both Afshar and his critics is misleading and has contributed to confusion surrounding the interpretation of the experiment. Nevertheless, it is concluded that Bohr would have had no more problem accounting for the Afshar result than he would in accounting for the aforementioned pre- and post-selection spin experiment, in which the particle's preparation state is confirmed by a nondestructive measurement prior to post-selection. In addition, some new inferences about the interpretation of delayed choice experiments are drawn from the analysis.
The intelligence of dual simplex method to solve linear fractional fuzzy transportation problem.
Narayanamoorthy, S; Kalyani, S
2015-01-01
An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The two linear fuzzy transportation problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example.
The Intelligence of Dual Simplex Method to Solve Linear Fractional Fuzzy Transportation Problem
Narayanamoorthy, S.; Kalyani, S.
2015-01-01
An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The two linear fuzzy transportation problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example. PMID:25810713
Multigrid approaches to non-linear diffusion problems on unstructured meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
The efficiency of three multigrid methods for solving highly non-linear diffusion problems on two-dimensional unstructured meshes is examined. The three multigrid methods differ mainly in the manner in which the nonlinearities of the governing equations are handled. These comprise a non-linear full approximation storage (FAS) multigrid method which is used to solve the non-linear equations directly, a linear multigrid method which is used to solve the linear system arising from a Newton linearization of the non-linear system, and a hybrid scheme which is based on a non-linear FAS multigrid scheme, but employs a linear solver on each level as a smoother. Results indicate that all methods are equally effective at converging the non-linear residual in a given number of grid sweeps, but that the linear solver is more efficient in cpu time due to the lower cost of linear versus non-linear grid sweeps.
Fixed Point Problems for Linear Transformations on Pythagorean Triples
ERIC Educational Resources Information Center
Zhan, M.-Q.; Tong, J.-C.; Braza, P.
2006-01-01
In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z]^T with y being even) into a standard Pythagorean triple, which have [3 4 5]^T as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…
Algebraic complementarity in quantum theory
Petz, Denes
2010-01-15
This paper is an overview of the concept of complementarity, the relation to state estimation, to Connes-Stoermer conditional (or relative) entropy, and to uncertainty relation. Complementary Abelian and noncommutative subalgebras are analyzed. All the known results about complementary decompositions are described and several open questions are included. The paper contains only few proofs, typically references are given.
A linear regression solution to the spatial autocorrelation problem
NASA Astrophysics Data System (ADS)
Griffith, Daniel A.
The Moran Coefficient spatial autocorrelation index can be decomposed into orthogonal map pattern components. This decomposition relates it directly to standard linear regression, in which the corresponding eigenvectors can be used as predictors. This paper reports comparative results between these linear regressions and their auto-Gaussian counterparts for the following georeferenced data sets: Columbus (Ohio) crime, Ottawa-Hull median family income, Toronto population density, southwest Ohio unemployment, Syracuse pediatric lead poisoning, and Glasgow standard mortality rates, together with a small remotely sensed image of the High Peak district. The methodology is extended to auto-logistic and auto-Poisson situations, with selected data analyses including the percentage of urban population across Puerto Rico and the frequency of SIDS cases across North Carolina. These data analytic results suggest that this approach to georeferenced data analysis offers considerable promise.
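The decomposition into map pattern components can be sketched in a few lines. The following is our own toy illustration (a 4x4 lattice stands in for the georeferenced data sets; function names are ours): the eigenvectors of the doubly centred spatial weight matrix are the orthogonal patterns, and the one with the largest eigenvalue carries the strongest positive Moran autocorrelation.

```python
import numpy as np

def moran_I(z, W):
    """Moran Coefficient of variable z under spatial weight matrix W."""
    z = z - z.mean()
    return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)

# Rook adjacency on a 4x4 lattice, standing in for real georeferenced data.
n_side = 4
n = n_side * n_side
W = np.zeros((n, n))
for i in range(n_side):
    for j in range(n_side):
        a = i * n_side + j
        if i + 1 < n_side:                 # neighbour below
            b = (i + 1) * n_side + j
            W[a, b] = W[b, a] = 1.0
        if j + 1 < n_side:                 # neighbour to the right
            b = i * n_side + (j + 1)
            W[a, b] = W[b, a] = 1.0

# Eigenvectors of the doubly centred weight matrix M W M are the orthogonal
# "map pattern" components; they can be entered as predictors in a standard
# linear regression to absorb spatial autocorrelation.
M = np.eye(n) - np.ones((n, n)) / n
vals, vecs = np.linalg.eigh(M @ W @ M)
e1 = vecs[:, -1]   # eigh sorts ascending: last column = most autocorrelated pattern
```

In an eigenvector spatial filtering regression one would select a subset of these columns and add them to the design matrix alongside the substantive predictors.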
The acoustics of a concert hall as a linear problem
NASA Astrophysics Data System (ADS)
Lokki, Tapio; Pätynen, Jukka
2015-01-01
The main purpose of a concert hall is to convey sound from musicians to listeners and to reverberate the music for more pleasant experience in the audience area. This process is linear and can be represented with impulse responses. However, by studying measured and simulated impulse responses for decades, researchers have not been able to exhaustively explain the success and reputation of certain concert halls.
From a Nonlinear, Nonconvex Variational Problem to a Linear, Convex Formulation
Egozcue, J.; Meziat, R.; Pedregal, P.
2002-12-19
We propose a general approach to deal with nonlinear, nonconvex variational problems based on a reformulation of the problem resulting in an optimization problem with linear cost functional and convex constraints. As a first step we explicitly explore these ideas to some one-dimensional variational problems and obtain specific conclusions of an analytical and numerical nature.
Method for Solving Physical Problems Described by Linear Differential Equations
NASA Astrophysics Data System (ADS)
Belyaev, B. A.; Tyurnev, V. V.
2017-01-01
A method for solving physical problems is suggested in which the general solution of a differential equation in partial derivatives is written in the form of decomposition in spherical harmonics with indefinite coefficients. Values of these coefficients are determined from a comparison of the decomposition with a solution obtained for any simplest particular case of the examined problem. The efficiency of the method is demonstrated on an example of calculation of electromagnetic fields generated by a current-carrying circular wire. The formulas obtained can be used to analyze paths in the near-field magnetic (magnetically inductive) communication systems working in moderately conductive media, for example, in sea water.
Fundamental solution of the problem of linear programming and method of its determination
NASA Technical Reports Server (NTRS)
Petrunin, S. V.
1978-01-01
The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
ERIC Educational Resources Information Center
Kar, Tugrul
2016-01-01
This study examined prospective middle school mathematics teachers' problem-posing skills by investigating their ability to associate linear graphs with daily life situations. Prospective teachers were given linear graphs and asked to pose problems that could potentially be represented by the graphs. Their answers were analyzed in two stages. In…
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
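The paper's two-phase relaxation is not reproduced here, but the general idea behind such branch and bound schemes, replacing a multiplicative term by valid linear bounds over a box, can be sketched with the standard McCormick underestimators for a bilinear term (our own illustration; all names and box values are ours):

```python
import numpy as np

def mccormick_lower(xl, xu, yl, yu):
    """Linear (McCormick) underestimators of the bilinear term w = x*y on the
    box [xl, xu] x [yl, yu]; their pointwise max is a convex lower bound."""
    def lb(x, y):
        return np.maximum(xl * y + yl * x - xl * yl,
                          xu * y + yu * x - xu * yu)
    return lb

xl, xu, yl, yu = -1.0, 2.0, 0.5, 3.0
lb = mccormick_lower(xl, xu, yl, yu)
xs, ys = np.meshgrid(np.linspace(xl, xu, 41), np.linspace(yl, yu, 41))
gap = xs * ys - lb(xs, ys)      # true bilinear value minus its linear lower bound
```

In a branch and bound loop, minimizing such linear underestimators by linear programming yields the lower bound on each subbox, while any feasible point provides an upper bound; the bounds tighten as the boxes are subdivided.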
Undetermined Coefficient Problems for Quasi-Linear Parabolic Equations
1989-12-18
recovered by an iteration scheme, and give sufficient conditions for the unique solution of the inverse problem. Equation (1.1) describes the evolution of… unique fixed point for T, and give conditions on the data for which such a fixed point exists. The solution can then be obtained by the iteration scheme… the solution pair (u, h) in the one-dimensional heat equation subject to the nonlinear boundary conditions u_x = h(u) on ∂Ω_2. The value of u(0, t)…
Aspects of complementarity and uncertainty
NASA Astrophysics Data System (ADS)
Vathsan, Radhika; Qureshi, Tabish
2016-08-01
The two-slit experiment with quantum particles provides many insights into the behavior of quantum mechanics, including Bohr’s complementarity principle. Here, we analyze Einstein’s recoiling slit version of the experiment and show how the inevitable entanglement between the particle and the recoiling slit as a which-way detector is responsible for complementarity. We derive the Englert-Greenberger-Yasin duality from this entanglement, which can also be thought of as a consequence of sum-uncertainty relations between certain complementary observables of the recoiling slit. Thus, entanglement is an integral part of the which-way detection process, and so is uncertainty, though in a completely different way from that envisaged by Bohr and Einstein.
Usefulness and problems of stereotactic radiosurgery using a linear accelerator.
Naoi, Y; Cho, N; Miyauchi, T; Iizuka, Y; Maehara, T; Katayama, H
1996-01-01
Since the introduction of linac radiosurgery in October 1994, we have treated 27 patients with 36 lesions: nine AVMs, 12 metastatic brain tumors, two malignant lymphomas, one anaplastic astrocytoma, two meningiomas, and one brain tumor of unknown pathology. In the follow-up examinations at least five months after treatment, the local control rate was 83% for the metastatic tumors, and the two malignant lymphomas disappeared completely. In addition, satisfactory results have been obtained with the AVMs and other brain tumors without any side effects. In comparison with gamma-knife radiosurgery, linac radiosurgery has some disadvantages, such as longer treatment time and cumbersome accuracy control, but if accuracy control is performed periodically, accuracies of 1 mm or less can be obtained. Linac radiosurgery also has several strengths: 1) the acquisition cost is relatively low; 2) dose distributions are equivalent to those of the gamma knife; 3) there is no field-size limitation; 4) there is great flexibility in beam delivery and linac systems. Radiosurgery using linear accelerators seems likely to become widely accepted in the future.
An application of a linear programing technique to nonlinear minimax problems
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1973-01-01
A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programming algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
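The linear minimax subproblem at the core of such a technique can be posed directly as a linear program: minimize a bound t with the residuals constrained to lie in [-t, t]. A small sketch of that reduction (our own construction with SciPy, not the paper's algorithm; the data set and names are ours):

```python
import numpy as np
from scipy.optimize import linprog

def minimax_fit(A, b):
    """Linear minimax problem min_x max_i |(A x - b)_i| posed as the LP
    minimize t subject to -t <= (A x - b)_i <= t."""
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                  # objective: the error bound t
    ones = np.ones((m, 1))
    A_ub = np.vstack([np.hstack([A, -ones]),     #   A x - b <= t
                      np.hstack([-A, -ones])])   # -(A x - b) <= t
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[-1]

# Minimax (Chebyshev) straight-line fit to data with a single bump.
t_data = np.linspace(0.0, 1.0, 9)
y = 2.0 * t_data + 1.0
y[4] += 0.4                                      # perturb the midpoint
A = np.column_stack([t_data, np.ones_like(t_data)])
coef, err = minimax_fit(A, y)
```

In the nonlinear setting of the paper, this LP would be re-solved at each differential correction step after linearizing the model about the current nominal solution.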
Towards Resolving the Crab Sigma-Problem: A Linear Accelerator?
NASA Technical Reports Server (NTRS)
Contopoulos, Ioannis; Kazanas, Demosthenes; White, Nicholas E. (Technical Monitor)
2002-01-01
Using the exact solution of the axisymmetric pulsar magnetosphere derived in a previous publication and the conservation laws of the associated MHD flow, we show that the Lorentz factor of the outflowing plasma increases linearly with distance from the light cylinder. Therefore, the ratio of the Poynting to particle energy flux, generically referred to as sigma, decreases inversely proportional to distance, from a large value (typically approx. greater than 10(exp 4)) near the light cylinder to sigma approx. = 1 at a transition distance R(sub trans). Beyond this distance the inertial effects of the outflowing plasma become important and the magnetic field geometry must deviate from the almost monopolar form it attains between R(sub lc), and R(sub trans). We anticipate that this is achieved by collimation of the poloidal field lines toward the rotation axis, ensuring that the magnetic field pressure in the equatorial region will fall-off faster than 1/R(sup 2) (R being the cylindrical radius). This leads both to a value sigma = a(sub s) much less than 1 at the nebular reverse shock at distance R(sub s) (R(sub s) much greater than R(sub trans)) and to a component of the flow perpendicular to the equatorial component, as required by observation. The presence of the strong shock at R = R(sub s) allows for the efficient conversion of kinetic energy into radiation. We speculate that the Crab pulsar is unique in requiring sigma(sub s) approx. = 3 x 10(exp -3) because of its small translational velocity, which allowed for the shock distance R(sub s) to grow to values much greater than R(sub trans).
Gene Golub; Kwok Ko
2009-03-30
The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms so that the ever-increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties in the previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
Global symmetry relations in linear and viscoplastic mobility problems
NASA Astrophysics Data System (ADS)
Kamrin, Ken; Goddard, Joe
2014-11-01
The mobility tensor of a textured surface is a homogenized effective boundary condition that describes the effective slip of a fluid adjacent to the surface in terms of an applied shear traction far above the surface. In the Newtonian fluid case, perturbation analysis yields a mobility tensor formula, which suggests that regardless of the surface texture (i.e. nonuniform hydrophobicity distribution and/or height fluctuations) the mobility tensor is always symmetric. This conjecture is verified using a Lorentz reciprocity argument. It motivates the question of whether such symmetries would arise for nonlinear constitutive relations and boundary conditions, where the mobility tensor is not a constant but a function of the applied stress. We show that in the case of a strongly dissipative nonlinear constitutive relation--one whose strain-rate relates to the stress solely through a scalar Edelen potential--and strongly dissipative surface boundary conditions--ones whose hydrophobic character is described by a potential relating slip to traction--the mobility function of the surface also maintains tensorial symmetry. By extension, the same variational arguments can be applied in problems such as the permeability tensor for viscoplastic flow through porous media, and we find that similar symmetries arise. These findings could be used to simplify the characterization of viscoplastic drag in various anisotropic media. (Joe Goddard is a former graduate student of Acrivos).
Solution algorithms for non-linear singularly perturbed optimal control problems
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1983-01-01
The applicability and usefulness of several classical and other methods for solving the two-point boundary-value problem which arises in non-linear singularly perturbed optimal control are assessed. Specific algorithms of the Picard, Newton and averaging types are formally developed for this class of problem. The computational requirements associated with each algorithm are analysed and compared with the computational requirement of the method of matched asymptotic expansions. Approximate solutions to a linear and a non-linear problem are obtained by each method and compared.
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
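For orientation, a brute-force baseline for the farthest string problem (find a string maximizing the minimum Hamming distance to all given strings) can be written in a few lines. This is exponential in the string length and serves only as a hypothetical reference point; it is not the integer programming models the abstract describes.

```python
from itertools import product

def hamming(a, b):
    # Hamming distance between two equal-length sequences.
    return sum(x != y for x, y in zip(a, b))

def farthest_string(strings, alphabet="01"):
    # Enumerate every candidate string; feasible only for tiny instances.
    L = len(strings[0])
    best, best_d = None, -1
    for cand in product(alphabet, repeat=L):
        d = min(hamming(cand, s) for s in strings)
        if d > best_d:
            best, best_d = "".join(cand), d
    return best, best_d

s, d = farthest_string(["0000", "1111"])
print(s, d)  # the best achievable minimum distance here is 2
```

The integer programming formulations exist precisely because this enumeration blows up as |alphabet|^L.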
A New Bound for the Ratio Between the 2-Matching Problem and Its Linear Programming Relaxation
Boyd, Sylvia; Carr, Robert
1999-07-28
Consider the 2-matching problem defined on the complete graph, with edge costs which satisfy the triangle inequality. We prove that the value of a minimum cost 2-matching is bounded above by 4/3 times the value of its linear programming relaxation, the fractional 2-matching problem. This lends credibility to a long-standing conjecture that the optimal value for the traveling salesman problem is bounded above by 4/3 times the value of its linear programming relaxation, the subtour elimination problem.
NASA Astrophysics Data System (ADS)
Zhadan, V. G.
2016-07-01
The linear semidefinite programming problem is considered. The dual affine scaling method, in which all iterates belong to the feasible set, is proposed for its solution. Moreover, the boundaries of the feasible set may be reached. This method is a generalization of a version of the affine scaling method that was earlier developed for linear programs to the case of semidefinite programming.
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
ERIC Educational Resources Information Center
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
Evaluation of linear solvers for oil reservoir simulation problems. Part 2: The fully implicit case
Joubert, W.; Janardhan, R.
1997-12-01
A previous paper [Joubert/Biswas 1997] contained investigations of linear solver performance for matrices arising from Amoco's Falcon parallel oil reservoir simulation code using the IMPES formulation (implicit pressure, explicit saturation). In this companion paper, similar issues are explored for linear solvers applied to matrices arising from more difficult fully implicit problems. The results of numerical experiments are given.
Bramble, J.H.; Pasciak, J.E.
1981-01-01
The linearized scalar potential formulation of the magnetostatic field problem is considered. The approach involves a reformulation of the continuous problem as a parametric boundary problem. By the introduction of a spherical interface and the use of spherical harmonics, the infinite boundary condition can also be satisfied in the parametric framework. The reformulated problem is discretized by finite element techniques and a discrete parametric problem is solved by conjugate gradient iteration. This approach decouples the problem in that only standard Neumann type elliptic finite element systems on separate bounded domains need be solved. The boundary conditions at infinity and the interface conditions are satisfied during the boundary parametric iteration.
Xu, Andrew Wei
2010-09-01
In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance [Formula: see text]. This is a key problem in genome rearrangement based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes--represented by adequate subgraphs--allow us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty in the circular case, and this difficulty has been underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into the capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it also can provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu .
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1986-01-01
An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
Newton's method for large bound-constrained optimization problems.
Lin, C.-J.; More, J. J.; Mathematics and Computer Science
1999-01-01
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
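As a hedged illustration of the problem class (not the authors' trust-region Newton code), SciPy's `trust-constr` method solves a small bound-constrained problem where the geometry of the feasible box, rather than any explicit constraint representation, determines the solution:

```python
import numpy as np
from scipy.optimize import minimize, Bounds

# Minimize a convex quadratic over the box [0,1]^2; the unconstrained
# minimizer (2, -1) lies outside, so both bounds are active at the solution.
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])

bounds = Bounds([0.0, 0.0], [1.0, 1.0])
res = minimize(f, x0=np.array([0.5, 0.5]), jac=grad,
               method="trust-constr", bounds=bounds)
print(res.x)  # close to (1, 0): the minimizer projected onto the box
```

Note that strict complementarity fails gracefully here; the paper's contribution is a convergence theory that does not need such assumptions.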
The synthesis of optimal controls for linear, time-optimal problems with retarded controls.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Jacobs, M. Q.; Latina, M. R.
1971-01-01
Optimization problems involving linear systems with retardations in the controls are studied in a systematic way. Some physical motivation for the problems is discussed. The topics covered are: controllability, existence and uniqueness of the optimal control, sufficient conditions, techniques of synthesis, and dynamic programming. A number of solved examples are presented.
Illusion of Linearity in Geometry: Effect in Multiple-Choice Problems
ERIC Educational Resources Information Center
Vlahovic-Stetic, Vesna; Pavlin-Bernardic, Nina; Rajter, Miroslav
2010-01-01
The aim of this study was to examine if there is a difference in the performance on non-linear problems regarding age, gender, and solving situation, and whether the multiple-choice answer format influences students' thinking. A total of 112 students, aged 15-16 and 18-19, were asked to solve problems for which solutions based on proportionality…
ERIC Educational Resources Information Center
Acevedo Nistal, Ana; Van Dooren, Wim; Verschaffel, Lieven
2013-01-01
Thirty-six secondary school students aged 14-16 were interviewed while they chose between a table, a graph or a formula to solve three linear function problems. The justifications for their choices were classified as (1) task-related if they explicitly mentioned the to-be-solved problem, (2) subject-related if students mentioned their own…
Yu, Guoshen; Sapiro, Guillermo; Mallat, Stéphane
2012-05-01
A general framework for solving image inverse problems with piecewise linear estimations is introduced in this paper. The approach is based on Gaussian mixture models, which are estimated via a maximum a posteriori expectation-maximization algorithm. A dual mathematical interpretation of the proposed framework with a structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared with traditional sparse inverse problem techniques. We demonstrate that, in a number of image inverse problems, including interpolation, zooming, and deblurring of narrow kernels, the same simple and computationally efficient algorithm yields results in the same ballpark as that of the state of the art.
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
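The LQ-regulator half of the LSOCE design can be sketched with modern tools. This is a minimal stand-in (a double-integrator model chosen for illustration), not the program described in the abstract, and it omits the Kalman estimator that handles the white-noise disturbance:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# For x_{k+1} = A x_k + B u_k with cost sum x'Qx + u'Ru, the optimal
# feedback u = -K x comes from the discrete algebraic Riccati equation.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # double integrator, dt = 0.1
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)                  # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain
eigs = np.linalg.eigvals(A - B @ K)
print(K, np.abs(eigs))  # closed-loop eigenvalues lie inside the unit circle
```

The closed-loop stability check at the end is the standard sanity test for an LQR design.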
On high-continuity transfinite element formulations for linear-nonlinear transient thermal problems
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1987-01-01
This paper describes recent developments in the applicability of a hybrid transfinite element methodology with emphasis on high-continuity formulations for linear/nonlinear transient thermal problems. The proposed concepts furnish accurate temperature distributions and temperature gradients making use of a relatively smaller number of degrees of freedom; and the methodology is applicable to linear/nonlinear thermal problems. Characteristic features of the formulations are described in technical detail as the proposed hybrid approach combines the major advantages and modeling features of high-continuity thermal finite elements in conjunction with transform methods and classical Galerkin schemes. Several numerical test problems are evaluated and the results obtained validate the proposed concepts for linear/nonlinear thermal problems.
Some comparisons of restarted GMRES and QMR for linear and nonlinear problems
Morgan, R.; Joubert, W.
1994-12-31
Comparisons are made between the following methods: QMR, including its transpose-free version; restarted GMRES; and a modified restarted GMRES that uses approximate eigenvectors to improve convergence. For some problems, the modified GMRES is competitive with or better than QMR in terms of the number of matrix-vector products. Also, the GMRES methods can be much better when several similar systems of linear equations must be solved, as in the case of nonlinear problems and ODE problems.
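A minimal restarted-GMRES run, with the restart length bounding the Krylov subspace stored per cycle, might look like the following. The matrix is an illustrative diagonally dominant tridiagonal system, not one of the paper's test problems:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Nonsymmetric tridiagonal system; `restart` caps memory per GMRES cycle,
# at the possible cost of more matrix-vector products than full GMRES.
n = 200
A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

x, info = gmres(A, b, restart=20, maxiter=500)
print(info, np.linalg.norm(A @ x - b))  # info == 0 signals convergence
```

Counting matrix-vector products across solvers, as the paper does, is the fair comparison metric because each method's per-iteration cost differs.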
Upper error bounds on calculated outputs of interest for linear and nonlinear structural problems
NASA Astrophysics Data System (ADS)
Ladevèze, Pierre
2006-07-01
This Note introduces new strict upper error bounds on outputs of interest for linear as well as time-dependent nonlinear structural problems calculated by the finite element method. Small-displacement problems without softening, such as (visco)plasticity problems, are included through the standard thermodynamics framework involving internal state variables. To cite this article: P. Ladevèze, C. R. Mecanique 334 (2006).
Initial-value problem for a linear ordinary differential equation of noninteger order
Pskhu, Arsen V
2011-04-30
An initial-value problem for a linear ordinary differential equation of noninteger order with Riemann-Liouville derivatives is stated and solved. The initial conditions of the problem ensure that (by contrast with the Cauchy problem) it is uniquely solvable for an arbitrary set of parameters specifying the orders of the derivatives involved in the equation; these conditions are necessary for the equation under consideration. The problem is reduced to an integral equation; an explicit representation of the solution in terms of the Wright function is constructed. As a consequence of these results, necessary and sufficient conditions for the solvability of the Cauchy problem are obtained. Bibliography: 7 titles.
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique.
Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control
Gaitsgory, Vladimir; Rossomakhine, Sergey
2015-04-15
The paper aims at the development of an apparatus for analysis and construction of near-optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but a possibility of the extension of results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of an available LP software) allow one to construct near-optimal controls of the SP system. We demonstrate the construction with two numerical examples.
A strictly improving linear programming algorithm based on a series of Phase 1 problems
Leichner, S.A.; Dantzig, G.B.; Davis, J.W.
1992-04-01
When used on degenerate problems, the simplex method often takes a number of degenerate steps at a particular vertex before moving to the next. In theory (although rarely in practice), the simplex method can actually cycle at such a degenerate point. Instead of trying to modify the simplex method to avoid degenerate steps, we have developed a new linear programming algorithm that is completely impervious to degeneracy. This new method solves the Phase II problem of finding an optimal solution by solving a series of Phase I feasibility problems. Strict improvement is attained at each iteration in the Phase I algorithm, and the Phase II sequence of feasibility problems has linear convergence in the number of Phase I problems. When tested on the 30 smallest NETLIB linear programming test problems, the computational results for the new Phase II algorithm were over 15% faster than the simplex method; on some problems, it was almost two times faster, and on one problem it was four times faster.
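The building block of the method, a Phase I feasibility problem whose optimal value is zero exactly when the original system is feasible, can be sketched as follows (a toy system, solved here with SciPy's LP solver rather than the authors' code):

```python
import numpy as np
from scipy.optimize import linprog

# Classic Phase I construction: to find x >= 0 with A@x = b (b >= 0),
# add artificial variables s >= 0 and minimize sum(s) subject to
# A@x + s = b.  A zero optimum certifies feasibility of the original system.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

m, n = A.shape
c = np.concatenate([np.zeros(n), np.ones(m)])  # cost only on artificials
A_eq = np.hstack([A, np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=b)            # default bounds give x, s >= 0
x, s = res.x[:n], res.x[n:]
print(res.fun)  # 0 up to round-off: the original system is feasible
```

The paper's Phase II algorithm drives optimality by solving a sequence of such feasibility problems, each strictly improving, which is why degeneracy cannot stall it.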
Cichocki, A; Unbehauen, R
1994-01-01
In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection (Kaczmarz) algorithm and/or the LMS (Adaline) Widrow-Hoff algorithm. The algorithms can be applied to any problem which can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
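The row-action projection (Kaczmarz) iteration that these analog networks emulate is easy to state in software; this is a generic textbook version, not the authors' circuit-level algorithm:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    # Row-action projection: each step projects the current iterate onto
    # the hyperplane {x : A[i] @ x = b[i]} defined by a single row.
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

# Consistent square system: Kaczmarz converges to the unique solution.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([5.0, 5.0])
x = kaczmarz(A, b)
print(x)  # converges to the solution [1, 2]
```

Because each update touches only one row, the scheme maps naturally onto streaming or analog hardware, which is the point of the paper.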
Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem
Yoo, Jaechil
1996-12-31
Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid method is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.
On Development of a Problem Based Learning System for Linear Algebra with Simple Input Method
NASA Astrophysics Data System (ADS)
Yokota, Hisashi
2011-08-01
Learning how to express a matrix using keyboard input requires a lot of time for most college students. Therefore, for a problem-based learning system for linear algebra to be accessible to college students, it is essential to develop a simple method for entering matrices. By studying the two most widely used input methods for expressing matrices, a simpler input method is obtained. Furthermore, using this input method and the educator's knowledge structure as a concept map, a problem-based learning system for linear algebra is developed which is capable of assessing students' knowledge structure and skill.
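A hypothetical flavor of such a lightweight matrix input syntax (rows separated by semicolons, entries by spaces; the paper's actual notation may differ) can be parsed in a few lines:

```python
import numpy as np

def parse_matrix(text):
    # MATLAB-style simple input: "1 2; 3 4" -> [[1., 2.], [3., 4.]].
    # This syntax is an illustrative assumption, not the paper's design.
    rows = [row.split() for row in text.split(";")]
    if len({len(r) for r in rows}) != 1:
        raise ValueError("all rows must have the same length")
    return np.array([[float(entry) for entry in r] for r in rows])

M = parse_matrix("1 2; 3 4")
print(M)  # 2x2 array [[1., 2.], [3., 4.]]
```

The design point is that students type matrices roughly as they would write them on paper, with no markup overhead.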
Role of complementarity in superdense coding
NASA Astrophysics Data System (ADS)
Coles, Patrick J.
2013-12-01
The complementarity of two observables is often captured in uncertainty relations, which quantify an inevitable trade-off in knowledge. Here we study complementarity in the context of an information-processing task: we link the complementarity of two observables to their usefulness for superdense coding (SDC). In SDC, Alice sends two classical dits of information to Bob by sending a single qudit. However, we show that encoding with commuting unitaries prevents Alice from sending more than one dit per qudit, implying that complementarity is necessary for SDC to be advantageous over a classical strategy for information transmission. When Alice encodes with products of Pauli operators for the X and Z bases, we quantify the complementarity of these encodings in terms of the overlap of the X and Z basis elements. Our main result explicitly solves for the SDC capacity as a function of the complementarity, showing that the entropy of the overlap matrix gives the capacity, when the preshared state is maximally entangled. We generalize this equation to resources with symmetric noise such as a preshared Werner state. In the most general case of arbitrary noisy resources, we obtain an analogous lower bound on the SDC capacity. Our results shed light on the role of complementarity in determining the quantum advantage in SDC and also seem fundamentally interesting since they bear a striking resemblance to uncertainty relations.
A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, Fred T.
1992-01-01
A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.
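For comparison, a box-constrained nonlinear least-squares problem of the same shape can be solved with SciPy's trust-region reflective solver; this is a stand-in illustration, not the quadratic-tensor algorithm of the abstract:

```python
import numpy as np
from scipy.optimize import least_squares

# Recover (amplitude, rate) of an exponential decay from noiseless data,
# with a box constraint on the unknowns (here loose, so it is inactive).
t = np.linspace(0.0, 4.0, 30)
y = 2.0 * np.exp(-1.3 * t)          # data generated with parameters (2, 1.3)

def resid(p):
    return p[0] * np.exp(-p[1] * t) - y

res = least_squares(resid, x0=[1.0, 1.0], bounds=([0.0, 0.0], [10.0, 10.0]))
print(res.x)  # recovers approximately [2.0, 1.3]
```

The quadratic-tensor model aims to beat such Gauss-Newton-style local models on problems where the residual is strongly nonlinear near the solution.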
Observation of complementarity in the macroscopic domain
Cao Dezhong; Xiong Jun; Tang Hua; Lin Lufang; Zhang Suheng; Wang Kaige
2007-09-15
Complementarity is usually considered as a phenomenon of microscopic systems. In this paper, we report an experimental observation of complementarity in correlated double-slit interference with a pseudothermal light source. The thermal light beam is divided into test and reference beams which are correlated with each other. The double slit is set in the test arm, and an interference pattern can be observed in the intensity correlation between the two arms. The experimental results show that the disappearance of the interference fringe depends on whether which-path information is gained through the reference arm. The experiment therefore shows complementarity occurring in the macroscopic domain.
Reintroducing the Concept of Complementarity into Psychology.
Wang, Zheng; Busemeyer, Jerome
2015-01-01
Central to quantum theory is the concept of complementarity. In this essay, we argue that complementarity is also central to the emerging field of quantum cognition. We review the concept, its historical roots in psychology, and its development in quantum physics and offer examples of how it can be used to understand human cognition. The concept of complementarity provides a valuable and fresh perspective for organizing human cognitive phenomena and for understanding the nature of measurements in psychology. In turn, psychology can provide valuable new evidence and theoretical ideas to enrich this important scientific concept.
Linear Integro-differential Schroedinger and Plate Problems Without Initial Conditions
Lorenzi, Alfredo
2013-06-15
Via Carleman's estimates we prove uniqueness and continuous dependence results for the temporal traces of solutions to overdetermined linear ill-posed problems related to the Schroedinger and plate equations. The overdetermination is prescribed in an open subset of the (space-time) lateral boundary.
Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.
ERIC Educational Resources Information Center
Shama, Gilli; Dreyfus, Tommy
1994-01-01
Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…
High Order Finite Difference Methods, Multidimensional Linear Problems and Curvilinear Coordinates
NASA Technical Reports Server (NTRS)
Nordstrom, Jan; Carpenter, Mark H.
1999-01-01
Boundary and interface conditions are derived for high order finite difference methods applied to multidimensional linear problems in curvilinear coordinates. The boundary and interface conditions lead to conservative schemes and strict and strong stability provided that certain metric conditions are met.
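A quick order-of-accuracy check for a 4th-order central interior stencil, the kind of operator such high-order methods build on (the paper's contribution is the stable boundary and interface closures, which this sketch omits):

```python
import numpy as np

def d4_interior(u, h):
    # (-u[i+2] + 8u[i+1] - 8u[i-1] + u[i-2]) / (12h), interior points only.
    return (-u[4:] + 8 * u[3:-1] - 8 * u[1:-3] + u[:-4]) / (12 * h)

errs = []
for n in (40, 80):
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    # Differentiate sin(x); exact derivative is cos(x) at interior nodes.
    err = np.max(np.abs(d4_interior(np.sin(x), h) - np.cos(x[2:-2])))
    errs.append(err)
rate = np.log2(errs[0] / errs[1])
print(rate)  # close to 4: halving h cuts the error by roughly 16x
```

Measured convergence rates like this are the standard check that the claimed order of accuracy survives the implementation.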
Bohrian Complementarity in the Light of Kantian Teleology
NASA Astrophysics Data System (ADS)
Pringe, Hernán
2014-03-01
The Kantian influences on Bohr's thought and the relationship between the perspective of complementarity in physics and in biology seem at first sight completely unrelated issues. However, the goal of this work is to show their intimate connection. We shall see that Bohr's views on biology shed light on Kantian elements of his thought, which enables a better understanding of his complementary interpretation of quantum theory. For this purpose, we shall begin by discussing Bohr's views on the analogies concerning the epistemological situation in biology and in physics. Later, we shall compare the Bohrian and the Kantian approaches to the science of life in order to show their close connection. On this basis, we shall finally turn to the issue of complementarity in quantum theory in order to assess what we can learn about the epistemological problems in the quantum realm from a consideration of Kant's views on teleology.
A general algorithm for control problems with variable parameters and quasi-linear models
NASA Astrophysics Data System (ADS)
Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.
2015-12-01
This paper presents an algorithm that is able to solve optimal control problems in which the modelling of the system contains variable parameters, with the added complication that, in certain cases, these parameters can lead to control problems governed by quasi-linear equations. Combining the techniques of Pontryagin's Maximum Principle and the shooting method, an algorithm has been developed that is not affected by the values of the parameters, being able to solve conventional problems as well as cases in which the optimal solution is shown to be bang-bang with singular arcs.
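The shooting-method half of the combination can be illustrated on a plain nonlinear two-point boundary-value problem; this hedged sketch omits the Pontryagin/adjoint machinery and the variable parameters the abstract addresses:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Solve u'' = -sin(u), u(0) = 0, u(1) = 1 by shooting on the initial slope.
def endpoint(s):
    # Integrate the IVP with guessed slope s and return u(1) - 1.
    sol = solve_ivp(lambda t, y: [y[1], -np.sin(y[0])],
                    (0.0, 1.0), [0.0, s], rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] - 1.0

# Root-find the slope that makes the trajectory hit the far boundary value.
s_star = brentq(endpoint, 0.0, 2.0)
print(s_star)
```

In the optimal-control setting, the same mechanics apply with the costate equations from the Maximum Principle in place of this toy ODE.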
Geometric tools for solving the FDI problem for linear periodic discrete-time systems
NASA Astrophysics Data System (ADS)
Longhi, Sauro; Monteriù, Andrea
2013-07-01
This paper studies the problem of detecting and isolating faults in linear periodic discrete-time systems. The aim is to design an observer-based residual generator where each residual is sensitive to one fault, whilst remaining insensitive to the other faults that can affect the system. Making use of geometric tools, in particular the notion of the outer observable subspace, the Fault Detection and Isolation (FDI) problem is formulated and solvability conditions are given. An algorithmic procedure is described to determine the solution of the FDI problem.
The Tricomi problem of a quasi-linear Lavrentiev-Bitsadze mixed type equation
NASA Astrophysics Data System (ADS)
Shuxing, Chen; Zhenguo, Feng
2013-06-01
In this paper, we consider the Tricomi problem of a quasi-linear Lavrentiev-Bitsadze mixed type equation $(\operatorname{sgn} u_y)\,\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - 1 = 0$, whose coefficients depend on the first-order derivative of the unknown function. We prove the existence of a solution to this problem by using the hodograph transformation. The method can be applied to study more difficult problems for nonlinear mixed type equations arising in gas dynamics.
NASA Technical Reports Server (NTRS)
Kent, James; Holdaway, Daniel
2015-01-01
A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
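The nonlinearity introduced by limiters can be demonstrated with a toy 1D test (a sketch, not the actual GEOS-5 schemes): apply a linear first-order upwind step and a minmod-limited step to a linear combination of fields and measure the departure from linearity.

```python
import numpy as np

def upwind_step(q, c=0.5):
    # First-order upwind advection step: exactly linear in q.
    return q - c * (q - np.roll(q, 1))

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(q, c=0.5):
    # Upwind step with a minmod-limited correction: the limiter is nonlinear.
    dq = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)
    f = q + 0.5 * (1 - c) * dq
    return q - c * (f - np.roll(f, 1))

def linearity_residual(step, q1, q2, a=2.0, b=-3.0):
    # ||step(a q1 + b q2) - (a step(q1) + b step(q2))||_inf
    return np.max(np.abs(step(a * q1 + b * q2) - (a * step(q1) + b * step(q2))))

rng = np.random.default_rng(0)
q1, q2 = rng.standard_normal(64), rng.standard_normal(64)
print(linearity_residual(upwind_step, q1, q2))   # machine epsilon: scheme is linear
print(linearity_residual(limited_step, q1, q2))  # nonzero: limiter breaks linearity
```

A tangent linear model of the limited scheme would have to linearize around the limiter's switching behavior, which is exactly the difficulty the abstract describes.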
The Kantian framework of complementarity
NASA Astrophysics Data System (ADS)
Cuffaro, Michael
A growing number of commentators have, in recent years, noted the important affinities in the views of Immanuel Kant and Niels Bohr. While these commentators are correct, the picture they present of the connections between Bohr and Kant is painted in broad strokes; it is open to the criticism that these affinities are merely superficial. In this essay, I provide a closer, structural, analysis of both Bohr's and Kant's views that makes these connections more explicit. In particular, I demonstrate the similarities between Bohr's argument, on the one hand, that neither the wave nor the particle description of atomic phenomena pick out an object in the ordinary sense of the word, and Kant's requirement, on the other hand, that both 'mathematical' (having to do with magnitude) and 'dynamical' (having to do with an object's interaction with other objects) principles must be applicable to appearances in order for us to determine them as objects of experience. I argue that Bohr's 'complementarity interpretation' of quantum mechanics, which views atomic objects as idealizations, and which licenses the repeal of the principle of causality for the domain of atomic physics, is perfectly compatible with, and indeed follows naturally from a broadly Kantian epistemological framework.
Voila: A visual object-oriented iterative linear algebra problem solving environment
Edwards, H.C.; Hayes, L.J.
1994-12-31
Application of iterative methods to solve a large linear system of equations currently involves writing a program which calls iterative method subprograms from a large software package. These subprograms have complex interfaces which are difficult to use and even more difficult to program. A problem solving environment specifically tailored to the development and application of iterative methods is needed. This need will be fulfilled by Voila, a problem solving environment which provides a visual programming interface to object-oriented iterative linear algebra kernels. Voila will provide several quantum improvements over current iterative method problem solving environments. First, programming and applying iterative methods is considerably simplified through Voila's visual programming interface. Second, iterative method algorithm implementations are independent of any particular sparse matrix data structure through Voila's object-oriented kernels. Third, the compile-link-debug process is eliminated as Voila operates as an interpreter.
The linearized characteristics method and its application to practical nonlinear supersonic problems
NASA Technical Reports Server (NTRS)
Ferri, Antonio
1952-01-01
The method of characteristics has been linearized by assuming that the flow field can be represented as a basic flow field, determined by nonlinearized methods, plus a linearized superposed flow field that accounts for small changes of boundary conditions. The method has been applied to two-dimensional rotational flow, where the basic flow is potential flow, and to axially symmetric problems, where conical flows have been used as the basic flows. In both cases the method allows the determination of the flow field to be simplified and the numerical work to be reduced to a few calculations. The calculation of axially symmetric flow can be simplified further if tabulated values of some coefficients of the conical flow are obtained. The method has also been applied to slender bodies without symmetry and to some three-dimensional wing problems where two-dimensional flow can be used as the basic flow. Both problems were previously unsolved in the nonlinear-flow approximation.
Stable computation of search directions for near-degenerate linear programming problems
Hough, P.D.
1997-03-01
In this paper, we examine stability issues that arise when computing search directions (Δx, Δy, Δs) for a primal-dual path-following interior point method for linear programming. The dual step Δy can be obtained by solving a weighted least-squares problem for which the weight matrix becomes extremely ill-conditioned near the boundary of the feasible region. Hough and Vavasis proposed using a type of complete orthogonal decomposition (the COD algorithm) to solve such a problem and presented stability results. The work presented here addresses the stable computation of the primal step Δx and the change in the dual slacks Δs. These directions can be obtained in a straightforward manner, but near-degeneracy in the linear programming instance introduces ill-conditioning which can cause numerical problems in this approach. Therefore, we propose a new method of computing Δx and Δs. More specifically, this paper describes an orthogonal projection algorithm that extends the COD method. Unlike other algorithms, this method is stable for interior point methods without assuming nondegeneracy in the linear programming instance. Thus, it is more general than other algorithms on near-degenerate problems.
Well-posedness of the time-varying linear electromagnetic initial-boundary value problem
NASA Astrophysics Data System (ADS)
Xie, Li; Lei, Yin-Zhao
2007-09-01
The well-posedness of the initial-boundary value problem of the time-varying linear electromagnetic field in a multi-medium region is investigated. Function spaces are defined, with Faraday's law of electromagnetic induction and the initial-boundary conditions considered as constraints. Gauss's formula applied to a multi-medium region is used to derive the energy-estimating inequality. After converting the initial-boundary conditions into homogeneous ones and analysing the characteristics of an operator introduced according to the total current law, the existence, uniqueness and stability of the weak solution to the initial-boundary value problem of the time-varying linear electromagnetic field are proved.
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained in the past for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems in order to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
NASA Astrophysics Data System (ADS)
Perrone, Antonio L.; Basti, Gianfranco
1995-04-01
With respect to Rosenblatt's linear perceptron, two classical limitation theorems demonstrated by M. Minsky and S. Papert are discussed. These two theorems, 'Ψ One-in-a-box' and 'Ψ Parity,' ultimately concern the intrinsic limitations of parallel calculations in pattern recognition problems. We demonstrate a possible solution of these limitation problems by substituting the static definition of characteristic functions and of their domains in the 'geometrical' perceptron with a dynamic definition. This dynamics consists of the mutual redefinition of the characteristic function and of its domain, depending on the matching with the input.
Observations on the linear programming formulation of the single reflector design problem.
Canavesi, Cristina; Cassarly, William J; Rolland, Jannick P
2012-02-13
We implemented the linear programming approach proposed by Oliker and by Wang to solve the single reflector problem for a point source and a far-field target. The algorithm was shown to produce solutions that aim the input rays at the intersections between neighboring reflectors. This feature makes it possible to obtain the same reflector with a low number of rays - of the order of the number of targets - as with a high number of rays, greatly reducing the computation complexity of the problem.
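The reflector problem itself requires the Oliker-Wang supporting-paraboloid machinery, but the underlying linear program has a transport structure, routing source-ray flux to far-field targets at minimum cost, that can be sketched with SciPy. All costs and fluxes below are made-up illustration data, not reflector geometry.

```python
import numpy as np
from scipy.optimize import linprog

# Toy transport LP: route flux from m source rays to n far-field targets.
m, n = 3, 2
cost = np.array([[1.0, 2.0],
                 [2.0, 1.0],
                 [1.5, 1.5]])          # hypothetical ray-to-target costs
supply = np.array([0.3, 0.3, 0.4])     # flux carried by each source ray
demand = np.array([0.5, 0.5])          # flux required at each target

# Variables x[i, j] flattened row-major; equalities enforce both marginals.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # each ray's flux fully assigned
for j in range(n):
    A_eq[m + j, j::n] = 1.0            # each target receives its demand
b_eq = np.concatenate([supply, demand])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
plan = res.x.reshape(m, n)             # optimal ray-to-target assignment
```

For this data the optimum sends each of the first two rays to its cheap target and splits the third, with total cost 1.2; the number of LP variables grows with rays times targets, which is why keeping the ray count near the target count matters.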
Solution of second order quasi-linear boundary value problems by a wavelet method
Zhang, Lei; Zhou, Youhe; Wang, Jizeng
2015-03-10
A wavelet Galerkin method based on expansions in Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one concerns nonlinear heat conduction and the other the bending of elastic beams. Numerical results are obtained by the proposed wavelet method. By comparing with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and that the convergence rate can even reach order 5.8.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1987-01-01
In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.
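In the finite-dimensional case, the alpha-shift is a one-liner around a Riccati solver: solving the LQR problem for A + αI places the closed-loop spectrum of A - BK strictly left of -α. A minimal sketch with an arbitrary 2x2 system (not the paper's hereditary setting):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical finite-dimensional system; the alpha-shift solves the Riccati
# equation for the shifted matrix A + alpha*I.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
alpha = 1.0

P = solve_continuous_are(A + alpha * np.eye(2), B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # optimal gain for the shifted system
closed_loop = A - B @ K
print(np.linalg.eigvals(closed_loop).real)  # all strictly less than -alpha
```

Since the LQR gain stabilizes A + αI - BK, every eigenvalue of A - BK has real part below -α, which is the prescribed degree of stability.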
Lorber, A.A.; Carey, G.F.; Bova, S.W.; Harle, C.H.
1996-12-31
The connection between the solution of linear systems of equations by iterative methods and explicit time stepping techniques is used to accelerate to steady state the solution of ODE systems arising from discretized PDEs which may involve either physical or artificial transient terms. Specifically, a class of Runge-Kutta (RK) time integration schemes with extended stability domains has been used to develop recursion formulas which lead to accelerated iterative performance. The coefficients for the RK schemes are chosen based on the theory of Chebyshev iteration polynomials in conjunction with a local linear stability analysis. We refer to these schemes as Chebyshev Parameterized Runge Kutta (CPRK) methods. CPRK methods of one to four stages are derived as functions of the parameters which describe an ellipse ε which the stability domain of the methods is known to contain. Of particular interest are two-stage, first-order CPRK and four-stage, first-order methods. It is found that the former method can be identified with any two-stage RK method through the correct choice of parameters. The latter method is found to have a wide range of stability domains, with a maximum extension of 32 along the real axis. Recursion performance results are presented below for a model linear convection-diffusion problem as well as non-linear fluid flow problems discretized by both finite-difference and finite-element methods.
The solution of the optimization problem of small energy complexes using linear programming methods
NASA Astrophysics Data System (ADS)
Ivanin, O. A.; Director, L. B.
2016-11-01
Linear programming methods were used for solving the optimization problem of schemes and operation modes of distributed generation energy complexes. Applicability conditions of the simplex method, as applied to energy complexes including renewable energy installations (solar, wind), diesel generators and energy storage, are considered. An analysis of decomposition algorithms for various schemes of energy complexes was made. The results of optimization calculations for energy complexes operated autonomously and as a part of a distribution grid are presented.
A Conforming Multigrid Method for the Pure Traction Problem of Linear Elasticity: Mixed Formulation
NASA Technical Reports Server (NTRS)
Lee, Chang-Ock
1996-01-01
A multigrid method using conforming P-1 finite element is developed for the two-dimensional pure traction boundary value problem of linear elasticity. The convergence is uniform even as the material becomes nearly incompressible. A heuristic argument for acceleration of the multigrid method is discussed as well. Numerical results with and without this acceleration as well as performance estimates on a parallel computer are included.
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithms development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation on how to practically update a preconditioner obtained from an ILU-type factorization, when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from Multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver - called pARMS [new version is version 3]. As part of this we have tested the code in complex settings - including the solution of Maxwell and Helmholtz equations and for a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the
Scilab software as an alternative low-cost computing in solving the linear equations problem
NASA Astrophysics Data System (ADS)
Agus, Fahrul; Haviluddin
2017-02-01
Numerical computation packages are widely used in both teaching and research. These packages may be licensed (proprietary) or open source (non-proprietary). One reason to use such a package is the complexity of mathematical functions (e.g., linear problems); moreover, the number of variables in linear and non-linear functions has increased. The aim of this paper was to reflect on key aspects related to method, didactics and creative praxis in the teaching of linear equations in higher education. If implemented, this could contribute to better learning in mathematics (i.e., solving simultaneous linear equations), which is essential for future engineers. The focus of this study was to introduce the numerical computation package Scilab as an alternative low-cost computing environment. In this paper, some activities related to mathematical models were proposed using Scilab. In the experiment, four numerical methods were implemented: Gaussian Elimination, Gauss-Jordan, Inverse Matrix, and Lower-Upper (LU) Decomposition. The results of this study showed that routines for these numerical methods were created and explored using Scilab procedures, and that these routines can be exploited as teaching material for a course.
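The same textbook methods are available in other free environments as well; as a point of comparison with the Scilab activities, a Python sketch contrasting two of the abstract's routes, LU decomposition and the (numerically discouraged) explicit inverse, on a small hand-checkable system:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Solve A x = b; the exact solution of this system is x = (1, -2, 3).
A = np.array([[4.0, -2.0, 1.0],
              [-2.0, 4.0, -2.0],
              [1.0, -2.0, 4.0]])
b = np.array([11.0, -16.0, 17.0])

x_lu = lu_solve(lu_factor(A), b)   # LU decomposition with partial pivoting
x_inv = np.linalg.inv(A) @ b       # inverse-matrix method
print(x_lu, x_inv)                 # both agree with [1, -2, 3]
```

For teaching, the LU route also illustrates why factor-once-solve-many is preferred: `lu_factor(A)` can be reused for any number of right-hand sides.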
A new gradient-based neural network for solving linear and quadratic programming problems.
Leung, Y; Chen, K Z; Jiao, Y C; Gao, X B; Leung, K S
2001-01-01
A new gradient-based neural network is constructed on the basis of the duality theory, optimization theory, convex analysis theory, Lyapunov stability theory, and LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) such that the function E(x, y) is convex and differentiable, and the resulting network is more efficient. This network involves all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with a unique solution or infinitely many solutions, we have proven strictly that, for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network is different from the existing networks which use the penalty method or Lagrange method, and the inequality constraints are properly handled. The simulation results show that the proposed neural network is feasible and efficient.
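The paper's network cannot be reproduced without its specific F(x, y), but the underlying idea, gradient dynamics whose trajectories converge to the QP optimum, can be sketched with a forward-Euler discretised projected gradient flow on a made-up strictly convex QP with nonnegativity constraints:

```python
import numpy as np

# min 0.5 x^T Q x + c^T x  subject to  x >= 0, with Q positive definite.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([-4.0, -3.0])

x = np.zeros(2)
for _ in range(5000):
    # Projected gradient dynamics: follow -grad, then project onto x >= 0.
    x = np.maximum(0.0, x - 0.05 * (Q @ x + c))

grad = Q @ x + c
print(x)                                 # converges to the optimum (1, 1)
print(np.max(np.abs(np.minimum(x, grad))))  # KKT complementarity residual ~ 0
```

For this data the unconstrained minimiser (1, 1) is already feasible, so the dynamics settle there with zero gradient; when a constraint is active, the projection keeps the trajectory feasible while the same Lyapunov-style descent argument applies.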
Arbitrary Lagrangian-Eulerian method for non-linear problems of geomechanics
NASA Astrophysics Data System (ADS)
Nazem, M.; Carter, J. P.; Airey, D. W.
2010-06-01
In many geotechnical problems it is vital to consider the geometrical non-linearity caused by large deformation in order to capture a more realistic model of the true behaviour. The solutions so obtained should then be more accurate and reliable, which should ultimately lead to cheaper and safer design. The Arbitrary Lagrangian-Eulerian (ALE) method originated from fluid mechanics, but has now been well established for solving large deformation problems in geomechanics. This paper provides an overview of the ALE method and its challenges in tackling problems involving non-linearities due to material behaviour, large deformation, changing boundary conditions and time-dependency, including material rate effects and inertia effects in dynamic loading applications. Important aspects of ALE implementation into a finite element framework will also be discussed. This method is then employed to solve some interesting and challenging geotechnical problems such as the dynamic bearing capacity of footings on soft soils, consolidation of a soil layer under a footing, and the modelling of dynamic penetration of objects into soil layers.
Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems
Choi, Sou-Cheng T.; Saunders, Michael A.
2014-01-01
We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite preconditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255
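SciPy ships plain MINRES (not MINRES-QLP; for singular systems the authors' package or their MATLAB versions are needed). On a nonsingular symmetric indefinite system, where the two methods agree, the intended usage looks like:

```python
import numpy as np
from scipy.sparse.linalg import minres

# Symmetric indefinite, nonsingular test matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, -3.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 0.0, 2.0])

x, info = minres(A, b)               # info == 0 signals convergence
print(info)
print(np.linalg.norm(A @ x - b))     # small residual
```

CG would be inappropriate here because A is indefinite; MINRES only requires symmetry, which is the niche both MINRES and MINRES-QLP occupy.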
A Linear Time Algorithm for the Minimum Spanning Caterpillar Problem for Bounded Treewidth Graphs
NASA Astrophysics Data System (ADS)
Dinneen, Michael J.; Khosravani, Masoud
We consider the Minimum Spanning Caterpillar Problem (MSCP) in a graph where each edge has two costs, spine (path) cost and leaf cost, depending on whether it is used as a spine or a leaf edge. The goal is to find a spanning caterpillar in which the sum of its edge costs is the minimum. We show that the problem has a linear time algorithm when a tree decomposition of the graph is given as part of the input. Despite the fast growing constant factor of the time complexity of our algorithm, it is still practical and efficient for some classes of graphs, such as outerplanar, series-parallel (K 4 minor-free), and Halin graphs. We also briefly explain how one can modify our algorithm to solve the Minimum Spanning Ring Star and the Dual Cost Minimum Spanning Tree Problems.
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
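The rounding-plus-exploratory-move idea can be illustrated on a two-variable toy problem (the data are invented, and this bare greedy search can stall at a local optimum in general; IESIP adds further machinery):

```python
import numpy as np

# Maximise c @ x subject to A @ x <= b, x >= 0, x integer.
c = np.array([3.0, 2.0])
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([10.0, 12.0])

def feasible(x):
    return np.all(x >= 0) and np.all(A @ x <= b)

# Start from the rounded continuous optimum (here (3.6, 2.8), precomputed).
x = np.floor(np.array([3.6, 2.8]))

# Hooke-Jeeves-style unit exploratory moves: accept any improving feasible step.
improved = True
while improved:
    improved = False
    for step in ([1, 0], [-1, 0], [0, 1], [0, -1]):
        y = x + step
        if feasible(y) and c @ y > c @ x:
            x, improved = y, True
print(x, c @ x)  # [4. 2.] 16.0
```

From the rounded point (3, 2) with objective 13, one unit move reaches (4, 2) with objective 16, which is in fact the integer optimum for this instance.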
Boundary parametric approximation to the linearized scalar potential magnetostatic field problem
Bramble, J.H.; Pasciak, J.E.
1984-01-01
We consider the linearized scalar potential formulation of the magnetostatic field problem in this paper. Our approach involves a reformulation of the continuous problem as a parametric boundary problem. By the introduction of a spherical interface and the use of spherical harmonics, the infinite boundary conditions can also be satisfied in the parametric framework. That is, the field in the exterior of a sphere is expanded in a harmonic series of eigenfunctions for the exterior harmonic problem. The approach is essentially a finite element method coupled with a spectral method via a boundary parametric procedure. The reformulated problem is discretized by finite element techniques which lead to a discrete parametric problem which can be solved by well conditioned iteration involving only the solution of decoupled Neumann type elliptic finite element systems and L^2 projection onto subspaces of spherical harmonics. Error and stability estimates given show exponential convergence in the degree of the spherical harmonics and optimal order convergence with respect to the finite element approximation for the resulting fields in L^2.
Acceleration of multiple solution of a boundary value problem involving a linear algebraic system
NASA Astrophysics Data System (ADS)
Gazizov, Talgat R.; Kuksenko, Sergey P.; Surovtsev, Roman S.
2016-06-01
Multiple solution of a boundary value problem that involves a linear algebraic system is considered. A new approach to accelerating the solution is proposed. The approach uses the structure of the linear system matrix. Particularly, the location of entries in the rightmost columns and bottom rows of the matrix, which vary as the computation sweeps a range of parameters, is exploited to apply block LU decomposition. Application of the approach is illustrated by the multiple computation of the capacitance matrix by the method of moments used in numerical electromagnetics. Expressions for analytic estimation of the acceleration are presented. Results of numerical experiments for the solution of 100 linear systems with matrix orders of 1000, 2000, 3000 and different ratios of varied to constant entries show that block LU decomposition can be effective for the multiple solution of linear systems. The speedup compared to pointwise LU factorization increases (up to 15 times) with the number and order of the systems and with a lower number of varied entries.
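The block-factorization idea, factor the constant leading block once and redo only the small Schur complement for each parameter variant, can be sketched as follows (sizes and data are illustrative, not a method-of-moments matrix):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n1, n2 = 6, 2
A11 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)   # constant block
lu11 = lu_factor(A11)                                   # factor once, reuse

def solve_variant(A12, A21, A22, b1, b2):
    # Block elimination: only the small Schur complement S changes per variant.
    Y = lu_solve(lu11, A12)                 # A11^{-1} A12
    S = A22 - A21 @ Y                       # Schur complement (n2 x n2)
    x2 = np.linalg.solve(S, b2 - A21 @ lu_solve(lu11, b1))
    x1 = lu_solve(lu11, b1) - Y @ x2
    return np.concatenate([x1, x2])

# One parameter variant, checked against a direct dense solve.
A12 = rng.standard_normal((n1, n2)); A21 = rng.standard_normal((n2, n1))
A22 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)
b = rng.standard_normal(n1 + n2)
x = solve_variant(A12, A21, A22, b[:n1], b[n1:])
A = np.block([[A11, A12], [A21, A22]])
print(np.linalg.norm(A @ x - b))  # near machine precision
```

Each additional variant costs only triangular solves against the cached factorization plus a tiny dense solve, which is where the reported speedups come from.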
Robustness in linear quadratic feedback design with application to an aircraft control problem
NASA Technical Reports Server (NTRS)
Patel, R. V.; Sridhar, B.; Toda, M.
1977-01-01
Some new results concerning robustness and asymptotic properties of error bounds of a linear quadratic feedback design are applied to an aircraft control problem. An autopilot for the flare control of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA) is designed based on Linear Quadratic (LQ) theory and the results developed in this paper. The variation of the error bounds to changes in the weighting matrices in the LQ design is studied by computer simulations, and appropriate weighting matrices are chosen to obtain a reasonable error bound for variations in the system matrix and at the same time meet the practical constraints for the flare maneuver of the AWJSRA. Results from the computer simulation of a satisfactory autopilot design for the flare control of the AWJSRA are presented.
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna
2016-01-01
State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of a SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results. PMID:27220686
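A minimal linear Gaussian SSM of the kind studied, random-walk states observed with noise, fits in a few lines; the filter below is a standard scalar Kalman filter, not the authors' estimation code, and sigma_proc/sigma_obs play the roles of biological stochasticity and measurement error:

```python
import numpy as np

def simulate(T, sigma_proc, sigma_obs, rng):
    x = np.cumsum(rng.normal(0, sigma_proc, T))   # latent state: random walk
    y = x + rng.normal(0, sigma_obs, T)           # noisy observations
    return x, y

def kalman_filter(y, sigma_proc, sigma_obs):
    m, P = 0.0, 1e6                               # diffuse initialisation
    means = []
    for yt in y:
        P = P + sigma_proc**2                     # predict
        K = P / (P + sigma_obs**2)                # Kalman gain
        m = m + K * (yt - m)                      # update
        P = (1 - K) * P
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(42)
x, y = simulate(500, sigma_proc=1.0, sigma_obs=2.0, rng=rng)
m = kalman_filter(y, 1.0, 2.0)
print(np.mean((m - x) ** 2) < np.mean((y - x) ** 2))  # filtering beats raw data
```

Filtering with the true variances works well; the abstract's point is that *estimating* sigma_proc and sigma_obs jointly from y alone becomes ill-determined when measurement error dominates, even in this simplest model.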
A method of fast, sequential experimental design for linearized geophysical inverse problems
NASA Astrophysics Data System (ADS)
Coles, Darrell A.; Morgan, Frank Dale
2009-07-01
An algorithm for linear(ized) experimental design is developed for a determinant-based design objective function. This objective function is common in design theory and is used to design experiments that minimize the model entropy, a measure of posterior model uncertainty. Of primary significance in design problems is computational expediency. Several earlier papers have focused attention on posing design objective functions and opted to use global search methods for finding the critical points of these functions, but these algorithms are too slow to be practical. The proposed technique is distinguished primarily for its computational efficiency, which derives partly from a greedy optimization approach, termed sequential design. Computational efficiency is further enhanced through formulae for updating determinants and matrix inverses without need for direct calculation. The design approach is orders of magnitude faster than a genetic algorithm applied to the same design problem. However, greedy optimization often trades global optimality for increased computational speed; the ramifications of this tradeoff are discussed. The design methodology is demonstrated on a simple, single-borehole DC electrical resistivity problem. Designed surveys are compared with random and standard surveys, both with and without prior information. All surveys were compared with respect to a `relative quality' measure, the post-inversion model per cent rms error. The issue of design for inherently ill-posed inverse problems is considered and an approach for circumventing such problems is proposed. The design algorithm is also applied in an adaptive manner, with excellent results suggesting that smart, compact experiments can be designed in real time.
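A minimal illustration of greedy (sequential) D-optimal design, using the matrix determinant lemma det(M + vv^T) = det(M)(1 + v^T M^{-1} v) to score candidate observations without recomputing determinants from scratch. The two-parameter model and candidate set are hypothetical; the paper's inverse-update formulae are likewise only sketched here.

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def quad(Minv, v):  # v^T M^{-1} v
    return sum(v[i] * Minv[i][j] * v[j] for i in range(2) for j in range(2))

def add_row(M, v):  # rank-one information update M + v v^T
    return [[M[i][j] + v[i] * v[j] for j in range(2)] for i in range(2)]

# Candidate measurement rows g(s) = (1, s) for a 2-parameter model y = a + b*s.
candidates = [(1.0, s / 10.0) for s in range(-10, 11)]
M = [[1e-6, 0.0], [0.0, 1e-6]]          # tiny ridge so M is invertible at the start
chosen = []
for _ in range(4):                       # greedily pick 4 observations
    Minv = inv2(M)
    # Matrix determinant lemma: det(M + vv^T) = det(M) * (1 + v^T M^{-1} v),
    # so maximizing the gain factor maximizes the updated determinant.
    v = max(candidates, key=lambda c: quad(Minv, c))
    chosen.append(v)
    M = add_row(M, v)
```

As D-optimal design theory predicts for a straight-line model, the greedy rule keeps picking the extreme design points s = ±1.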
Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems
Avron, Haim; Ng, Esmond G.; Toledo, Sivan
2008-03-21
We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax - b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ||Ax - b||_2, when solved using LSQR. We propose applications for the new technique. When A is rank deficient we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
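The preconditioning strategy above rests on the R factor of a QR factorization; the toy below shows only that basic mechanism, a modified Gram-Schmidt QR used to solve a small least-squares problem by back substitution. The low-rank perturbation and the coupling with LSQR are not reproduced, and the example data are invented.

```python
def qr_mgs(A):
    """Modified Gram-Schmidt QR of a tall matrix A (list of rows)."""
    m, n = len(A), len(A[0])
    Q = [[A[i][j] for j in range(n)] for i in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for k in range(n):
        R[k][k] = sum(Q[i][k] ** 2 for i in range(m)) ** 0.5
        for i in range(m):
            Q[i][k] /= R[k][k]
        for j in range(k + 1, n):
            R[k][j] = sum(Q[i][k] * Q[i][j] for i in range(m))
            for i in range(m):
                Q[i][j] -= R[k][j] * Q[i][k]
    return Q, R

def solve_ls(A, b):
    """Solve min ||Ax - b||_2 via QR: R x = Q^T b, by back substitution."""
    Q, R = qr_mgs(A)
    n = len(R)
    qtb = [sum(Q[i][k] * b[i] for i in range(len(b))) for k in range(n)]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (qtb[k] - sum(R[k][j] * x[j] for j in range(k + 1, n))) / R[k][k]
    return x

A = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]]
b = [3.0, 5.0, 7.0, 9.0]          # exactly b = 1 + 2*s on s = 1..4
x = solve_ls(A, b)
```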
NASA Astrophysics Data System (ADS)
Tang, Yao-Zong; Li, Xiao-Lin
2017-03-01
We first give a stabilized improved moving least squares (IMLS) approximation, which has better computational stability and precision than the IMLS approximation. We then provide a theoretical analysis of the improved element-free Galerkin method for both linear and nonlinear elliptic boundary value problems. Finally, numerical examples are given to verify the theoretical analysis. Project supported by the National Natural Science Foundation of China (Grant No. 11471063), the Chongqing Research Program of Basic Research and Frontier Technology, China (Grant No. cstc2015jcyjBX0083), and the Educational Commission Foundation of Chongqing City, China (Grant No. KJ1600330).
Fredholm alternative for periodic-Dirichlet problems for linear hyperbolic systems
NASA Astrophysics Data System (ADS)
Kmit, Irina; Recke, Lutz
2007-11-01
This paper concerns hyperbolic systems of two linear first-order PDEs in one space dimension with periodicity conditions in time and reflection boundary conditions in space. The coefficients of the PDEs are supposed to be time independent, but allowed to be discontinuous with respect to the space variable. We construct two scales of Banach spaces (for the solutions and for the right-hand sides of the equations, respectively) such that the problem can be modeled by means of Fredholm operators of index zero between corresponding spaces of the two scales.
NASA Astrophysics Data System (ADS)
Wu, Jiming; Gao, Zhiming; Dai, Zihuan
2012-08-01
In this paper a stabilized discretization scheme for heterogeneous and anisotropic diffusion problems is proposed on general, possibly nonconforming polygonal meshes. The unknowns are the values at the cell centers, and the scheme relies on a linearity-preserving criterion and the use of so-called harmonic averaging points located at the interfaces of heterogeneity. The stability result and the error estimate, both in the H1 norm, are obtained under quite general and standard assumptions on polygonal meshes. Numerical experiments on a number of different meshes show that the scheme maintains optimal convergence rates in both the L2 and H1 norms.
Madrigal-González, Jaime; Ruiz-Benito, Paloma; Ratcliffe, Sophia; Calatayud, Joaquín; Kändler, Gerald; Lehtonen, Aleksi; Dahlgren, Jonas; Wirth, Christian; Zavala, Miguel A
2016-08-30
Neglecting tree size and stand structure dynamics might bias the interpretation of the diversity-productivity relationship in forests. Here we show evidence that complementarity is contingent on tree size across large-scale climatic gradients in Europe. We compiled growth data of the 14 most dominant tree species in 32,628 permanent plots covering boreal, temperate and Mediterranean forest biomes. Niche complementarity is expected to result in significant growth increments of trees surrounded by a larger proportion of functionally dissimilar neighbours. Functional dissimilarity at the tree level was assessed using four functional types: i.e. broad-leaved deciduous, broad-leaved evergreen, needle-leaved deciduous and needle-leaved evergreen. Using Linear Mixed Models we show that complementarity effects depend on tree size along an energy availability gradient across Europe. Specifically: (i) complementarity effects at low and intermediate positions of the gradient (coldest-temperate areas) were stronger for small than for large trees; (ii) in contrast, at the upper end of the gradient (warmer regions), complementarity is more widespread in larger than smaller trees, which in turn showed negative growth responses to increased functional dissimilarity. Our findings suggest that the outcome of species mixing on stand productivity might critically depend on individual size distribution structure along gradients of environmental variation. PMID:27571971
A Vector Study of Linearized Supersonic Flow Applications to Nonplanar Problems
NASA Technical Reports Server (NTRS)
Martin, John C
1953-01-01
A vector study of the partial-differential equation of steady linearized supersonic flow is presented. General expressions are derived which relate the velocity potential in the stream to the conditions on the disturbing surfaces. In connection with these general expressions, the concept of the finite part of an integral is discussed. A discussion of problems dealing with planar bodies is given, and the conditions for the solution to be unique are investigated. Problems concerning nonplanar systems are investigated, and methods are derived for the solution of some simple nonplanar bodies. The surface pressure distribution and the damping in roll are found for rolling tails consisting of four, six, and eight rectangular fins for the Mach number range where the region of interference between adjacent fins does not affect the fin tips.
Self-complementarity of messenger RNA's of periodic proteins
NASA Technical Reports Server (NTRS)
Ycas, M.
1973-01-01
It is shown that the mRNAs of three periodic proteins (collagen, keratin, and freezing-point-depressing glycoproteins) show a marked degree of self-complementarity. The possible origin of this self-complementarity is discussed.
Xia, Youshen; Sun, Changyin; Zheng, Wei Xing
2012-05-01
There is growing interest in solving linear L1 estimation problems for the sparsity of the solution and robustness against non-Gaussian noise. This paper proposes a discrete-time neural network that can rapidly solve large linear L1 estimation problems. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. It is then efficiently applied to image restoration. Numerical results show that the proposed neural network not only handles degenerate problems arising from the nonunique solutions of linear L1 estimation problems but also needs much less computational time than related algorithms in solving both linear L1 estimation and image restoration problems.
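A hedged, one-dimensional illustration of why L1 estimation is robust to non-Gaussian noise: the L1 location estimate is the median, the L2 estimate is the mean, and a single gross outlier moves only the latter. This is a textbook analogy chosen by me, not the paper's neural network.

```python
def l2_estimate(xs):
    """Minimizer of sum (x - c)^2: the sample mean (sensitive to outliers)."""
    return sum(xs) / len(xs)

def l1_estimate(xs):
    """Minimizer of sum |x - c|: the sample median (robust to outliers)."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

clean = [9.8, 10.1, 10.0, 9.9, 10.2]
noisy = clean + [100.0]       # one gross non-Gaussian outlier
```

The mean of `noisy` is dragged to 25, while the median stays near 10; the same contrast between the L2 and L1 objectives carries over to the regression setting of the paper.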
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constraints the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
A linear model approach for ultrasonic inverse problems with attenuation and dispersion.
Carcreff, Ewen; Bourguignon, Sébastien; Idier, Jérôme; Simon, Laurent
2014-07-01
Ultrasonic inverse problems such as spike train deconvolution, synthetic aperture focusing, or tomography attempt to reconstruct spatial properties of an object (discontinuities, delaminations, flaws, etc.) from noisy and incomplete measurements. They require an accurate description of the data acquisition process. Dealing with frequency-dependent attenuation and dispersion is therefore crucial because both phenomena modify the wave shape as the travel distance increases. In an inversion context, this paper proposes to exploit a linear model of ultrasonic data taking into account attenuation and dispersion. The propagation distance is discretized to build a finite set of radiation impulse responses. Attenuation is modeled with a frequency power law and then dispersion is computed to yield physically consistent responses. Using experimental data acquired from attenuative materials, this model outperforms the standard attenuation-free model and other models of the literature. Because of model linearity, robust estimation methods can be implemented. When matched filtering is employed for single echo detection, the model that we propose yields precise estimation of the attenuation coefficient and of the sound velocity. A thickness estimation problem is also addressed through spike deconvolution, for which the proposed model also achieves accurate results.
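The frequency power-law attenuation part of such a model can be sketched directly. The amplitude response below uses exp(-alpha0 * |f|^y * d) with made-up coefficients, and omits the dispersion (phase) part that the paper computes to keep the responses physically consistent.

```python
import math

def attenuation_filter(freqs_mhz, distance_mm, alpha0=0.05, power=1.5):
    """Amplitude response exp(-alpha0 * |f|^power * d) of a frequency
    power-law attenuation model (coefficients are illustrative only)."""
    return [math.exp(-alpha0 * abs(f) ** power * distance_mm) for f in freqs_mhz]

freqs = [1.0, 2.0, 5.0, 10.0]
near = attenuation_filter(freqs, distance_mm=5.0)
far = attenuation_filter(freqs, distance_mm=20.0)
```

Both effects the abstract mentions are visible: attenuation grows with travel distance, and higher frequencies are damped more strongly, which is what reshapes the echo as it propagates.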
A stabilized complementarity formulation for nonlinear analysis of 3D bimodular materials
NASA Astrophysics Data System (ADS)
Zhang, L.; Zhang, H. W.; Wu, J.; Yan, B.
2016-06-01
Bi-modulus materials with different mechanical responses in tension and compression are often found in civil, composite, and biological engineering. Numerical analysis of bimodular materials is strongly nonlinear and convergence is usually a problem for traditional iterative schemes. This paper aims to develop a stabilized computational method for nonlinear analysis of 3D bimodular materials. Based on the parametric variational principle, a unified constitutive equation of 3D bimodular materials is proposed, which allows the eight principal stress states to be indicated by three parametric variables introduced in the principal stress directions. The original problem is transformed into a standard linear complementarity problem (LCP) by the parametric virtual work principle and a quadratic programming algorithm is developed by solving the LCP with the classic Lemke's algorithm. Update of elasticity and stiffness matrices is avoided and, thus, the proposed algorithm shows an excellent convergence behavior compared with traditional iterative schemes. Numerical examples show that the proposed method is valid and can accurately analyze mechanical responses of 3D bimodular materials. Also, stability of the algorithm is greatly improved.
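The reduction to a standard LCP (find z >= 0 with w = Mz + q >= 0 and z.w = 0) can be illustrated on a 2x2 instance. Lemke's algorithm itself is longer; the sketch below simply enumerates the four complementary bases, which is equivalent for this toy size. The matrix and vector are invented.

```python
def solve_lcp_2x2(M, q):
    """Find z >= 0 with w = M z + q >= 0 and z . w = 0 by enumerating
    the four complementary bases of a 2x2 LCP (toy stand-in for Lemke)."""
    eps = 1e-12
    for active in ((0, 0), (1, 0), (0, 1), (1, 1)):  # 1 means z_i basic (w_i = 0)
        if active == (0, 0):
            z = [0.0, 0.0]
        elif active == (1, 0):
            if abs(M[0][0]) < eps:
                continue
            z = [-q[0] / M[0][0], 0.0]
        elif active == (0, 1):
            if abs(M[1][1]) < eps:
                continue
            z = [0.0, -q[1] / M[1][1]]
        else:
            d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
            if abs(d) < eps:
                continue
            z = [(-q[0] * M[1][1] + q[1] * M[0][1]) / d,
                 (-q[1] * M[0][0] + q[0] * M[1][0]) / d]
        w = [M[i][0] * z[0] + M[i][1] * z[1] + q[i] for i in range(2)]
        if all(v >= -1e-9 for v in z) and all(v >= -1e-9 for v in w):
            return z, w
    return None

M = [[2.0, 1.0], [1.0, 2.0]]
q = [-5.0, -6.0]
z, w = solve_lcp_2x2(M, q)
```

Enumeration is exponential in the dimension, which is exactly why pivoting schemes such as Lemke's algorithm are used for the real problems in the paper.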
NASA Astrophysics Data System (ADS)
Zhou, Qinglong; Long, Yiming
2017-04-01
In this paper, we consider the elliptic collinear solutions of the classical n-body problem, where the n bodies always stay on a straight line, and each of them moves on its own elliptic orbit with the same eccentricity. Such a motion is called an elliptic Euler-Moulton collinear solution. Here we prove that the corresponding linearized Hamiltonian system at such an elliptic Euler-Moulton collinear solution of n-bodies splits into (n-1) independent linear Hamiltonian systems, the first one is the linearized Hamiltonian system of the Kepler 2-body problem at Kepler elliptic orbit, and each of the other (n-2) systems is the essential part of the linearized Hamiltonian system at an elliptic Euler collinear solution of a 3-body problem whose mass parameter is modified. Then the linear stability of such a solution in the n-body problem is reduced to those of the corresponding elliptic Euler collinear solutions of the 3-body problems, which for example then can be further understood using numerical results of Martínez et al. on 3-body Euler solutions in 2004-2006. As an example, we carry out the detailed derivation of the linear stability for an elliptic Euler-Moulton solution of the 4-body problem with two small masses in the middle.
An improved exploratory search technique for pure integer linear programming problems
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1990-01-01
The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
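The central +-1 exploratory-move idea can be compressed into a first-improvement greedy search from a feasible rounded start. The test problem and starting point below are invented, and the simplex/rounding stages of the actual procedure are not reproduced.

```python
def feasible(x, A, b):
    """Check x >= 0 and A x <= b componentwise."""
    return all(v >= 0 for v in x) and all(
        sum(A[i][j] * x[j] for j in range(len(x))) <= b[i] for i in range(len(A)))

def greedy_pm1(x, c, A, b):
    """Hooke-Jeeves-style +-1 exploratory search: repeatedly take any
    single-coordinate +-1 move that improves the objective c . x."""
    best = list(x)
    improved = True
    while improved:
        improved = False
        val = sum(ci * xi for ci, xi in zip(c, best))
        for j in range(len(best)):
            for step in (1, -1):
                cand = list(best)
                cand[j] += step
                cv = sum(ci * xi for ci, xi in zip(c, cand))
                if feasible(cand, A, b) and cv > val:
                    best, val, improved = cand, cv, True
    return best

# maximize 3x + 2y  s.t.  x + y <= 4, x <= 2, x, y >= 0 and integer
c, A, b = [3, 2], [[1, 1], [1, 0]], [4, 2]
x0 = [1, 1]                     # stand-in for a rounded LP solution
x_best = greedy_pm1(x0, c, A, b)
```

On this instance the search climbs to the integer optimum [2, 2]; as the abstract notes, such neighborhood moves match branch-and-bound on most small problems but are not guaranteed to.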
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
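The LP-vs-IP gap for vertex cover is easy to exhibit on a triangle. Because the vertex-cover LP relaxation is half-integral, brute-forcing over the grid {0, 1/2, 1} recovers the true LP optimum for this toy instance; the statistical-mechanics analysis in the paper concerns the typical size of this gap on random graphs.

```python
from itertools import product

def covers(x, edges):
    """Check the fractional cover constraint x_u + x_v >= 1 on every edge."""
    return all(x[u] + x[v] >= 1 for u, v in edges)

def min_cover(edges, n, levels):
    """Brute-force the minimum cover weight over the given value levels."""
    best = None
    for x in product(levels, repeat=n):
        if covers(x, edges):
            s = sum(x)
            if best is None or s < best:
                best = s
    return best

triangle = [(0, 1), (1, 2), (0, 2)]
ip_opt = min_cover(triangle, 3, (0, 1))        # integer vertex cover
lp_opt = min_cover(triangle, 3, (0, 0.5, 1))   # LP is half-integral, so this grid suffices
```

Here the LP relaxation gives 3/2 while the integer optimum is 2, a concrete instance of the relaxation underestimating the IP value.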
Linear stability analysis in the numerical solution of initial value problems
NASA Astrophysics Data System (ADS)
van Dorsselaer, J. L. M.; Kraaijevanger, J. F. B. M.; Spijker, M. N.
This article addresses the general problem of establishing upper bounds for the norms of the nth powers of square matrices. The focus is on upper bounds that grow only moderately (or stay constant) as n, or the order of the matrices, increases. The so-called resolvent condition, occurring in the famous Kreiss matrix theorem, is a classical tool for deriving such bounds. Recently the classical upper bounds known to be valid under Kreiss's resolvent condition have been improved. Moreover, generalizations of this resolvent condition have been considered so as to widen the range of applications. The main purpose of this article is to review and extend some of these new developments. The upper bounds for the powers of matrices discussed in this article are intimately connected with the stability analysis of numerical processes for solving initial(-boundary) value problems in ordinary and partial linear differential equations. The article highlights this connection. The article concludes with numerical illustrations in the solution of a simple initial-boundary value problem for a partial differential equation.
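The transient growth that motivates resolvent-type bounds can be seen with a 2x2 Jordan-like block: its spectral radius is below one, so ||A^n|| eventually decays, but the norms first grow well above their initial value. The matrix is my own illustrative choice.

```python
def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def norm_inf(A):
    """Induced infinity norm: maximum absolute row sum."""
    return max(abs(A[i][0]) + abs(A[i][1]) for i in range(2))

A = [[0.9, 1.0], [0.0, 0.9]]   # Jordan-like block, spectral radius 0.9 < 1
P = [[1.0, 0.0], [0.0, 1.0]]
norms = []
for _ in range(60):            # record ||A^n|| for n = 1..60
    P = matmul2(P, A)
    norms.append(norm_inf(P))
```

Eigenvalue analysis alone (spectral radius 0.9) predicts monotone decay; the hump in `norms` is exactly the behavior that resolvent conditions are designed to bound.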
Carey, G.F.; Young, D.M.
1993-12-31
The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.
Aksenov, V. L.; Kiselev, M. A.
2010-12-15
General problems of the complementarity of different physical methods and specific features of the interaction between neutron and matter and neutron diffraction with respect to the time of flight are discussed. The results of studying the kinetics of structural changes in lipid membranes under hydration and self-assembly of the lipid bilayer in the presence of a detergent are reported. The possibilities of the complementarity of neutron diffraction and X-ray synchrotron radiation and developing a free-electron laser are noted.
NASA Astrophysics Data System (ADS)
Dotti, Gustavo; Gleiser, Reinaldo J.
2009-11-01
The coupled equations for the scalar modes of the linearized Einstein equations around Schwarzschild's spacetime were reduced by Zerilli to a (1+1) wave equation ∂²Ψ_z/∂t² + HΨ_z = 0, where H = -∂²/∂x² + V(x) is the Zerilli 'Hamiltonian' and x is the tortoise radial coordinate. From its definition, for smooth metric perturbations the field Ψ_z is singular at r_s = -6M/[(ℓ - 1)(ℓ + 2)], with ℓ being the mode harmonic number. The equation Ψ_z obeys is also singular, since V has a second-order pole at r_s. This is irrelevant to the black hole exterior stability problem, where r > 2M > 0 and r_s < 0, but it introduces a non-trivial problem in the naked singular case where M < 0, then r_s > 0, and the singularity appears in the relevant range of r (0 < r < ∞). We solve this problem by developing a new approach to the evolution of the even mode, based on a new gauge-invariant function, Ψ̂, that is a regular function of the metric perturbation for any value of M. The relation of Ψ̂ to Ψ_z is provided by an intertwiner operator. The spatial pieces of the (1+1) wave equations that Ψ̂ and Ψ_z obey are related as a supersymmetric pair of quantum Hamiltonians H and Ĥ. For M < 0, Ĥ has a regular potential and a unique self-adjoint extension in a domain D defined by a physically motivated boundary condition at r = 0. This allows us to address the issue of evolution of gravitational perturbations in this non-globally hyperbolic background. This formulation is used to complete the proof of the linear instability of the Schwarzschild naked singularity, by showing that a previously found unstable mode belongs to a complete basis of Ĥ in D, and thus is excitable by generic initial data. This is further illustrated by numerically solving the linearized equations for suitably chosen initial data.
A review of vector convergence acceleration methods, with applications to linear algebra problems
NASA Astrophysics Data System (ADS)
Brezinski, C.; Redivo-Zaglia, M.
In this article, in a few pages, we will try to give an idea of convergence acceleration methods and extrapolation procedures for vector sequences, and to present some applications to linear algebra problems and to the treatment of the Gibbs phenomenon for Fourier series, in order to show their effectiveness. The interested reader is referred to the literature for more details. In the bibliography, due to space limitations, we give only the more recent items; for older ones, we refer to Brezinski and Redivo-Zaglia (Extrapolation Methods: Theory and Practice, North-Holland, 1991). This book also contains, on a magnetic support, a library (in Fortran 77) of convergence acceleration algorithms and extrapolation methods.
First-order system least squares for the pure traction problem in planar linear elasticity
Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.
1996-12-31
This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L^2 norms to define the FOSLS functional, is shown under certain H^2 regularity assumptions to admit optimal H^1-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H^{-1} norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L^2 norm and for displacement in an H^1 norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.
Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem
NASA Technical Reports Server (NTRS)
Lu, Huei-Iin; Robertson, Franklin R.
1999-01-01
A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can be alternatively solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity in an intermediate resolution. Improved solutions were achieved by including the singular-component solutions which best fit the observed wind data.
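The different levels of determinacy can be illustrated with stdlib-only linear algebra: an over-determined fit solved by the normal equations, and an under-determined equation solved by the minimum-norm (pseudo-inverse) formula. All numbers are illustrative and unrelated to the balance-equation application.

```python
def solve2(M, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / d,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / d]

def least_squares(A, b):
    """Over-determined case: minimize ||Ax - b|| via A^T A x = A^T b."""
    m = len(A)
    AtA = [[sum(A[i][k] * A[i][j] for i in range(m)) for j in range(2)]
           for k in range(2)]
    Atb = [sum(A[i][k] * b[i] for i in range(m)) for k in range(2)]
    return solve2(AtA, Atb)

def min_norm(a, beta):
    """Under-determined case, one equation a . x = beta: the
    minimum-norm solution x = a^T (a a^T)^{-1} beta."""
    s = sum(v * v for v in a)
    return [v * beta / s for v in a]

x_od = least_squares([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]], [1.9, 4.1, 6.0])
x_ud = min_norm([3.0, 4.0], 5.0)   # min-norm solution of 3x + 4y = 5
```

The under-determined solver picks, from the whole line of solutions, the one closest to the origin, the analogue of the weak UD response noted in the abstract.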
The efficient solution of the (quietly constrained) noisy, linear regulator problem
NASA Astrophysics Data System (ADS)
Gregory, John; Hughes, H. R.
2007-09-01
In a previous paper we gave a new, natural extension of the calculus of variations/optimal control theory to a (strong) stochastic setting. We now extend the theory of this most fundamental chapter of optimal control in several directions. Most importantly we present a new method of stochastic control, adding Brownian motion which makes the problem "noisy." Secondly, we show how to obtain efficient solutions: direct stochastic integration for simpler problems and/or efficient and accurate numerical methods with a global a priori error of O(h^(3/2)) for more complex problems. Finally, we include "quiet" constraints, i.e. deterministic relationships between the state and control variables. Our theory and results can be immediately restricted to the non-"noisy" (deterministic) case, yielding efficient numerical solution techniques and an a priori error of O(h^2). In this event we obtain the most efficient method of solving the (constrained) classical Linear Regulator Problem. Our methods are different from the standard theory of stochastic control. In some cases the solutions coincide or at least are closely related. However, our methods have many advantages including those mentioned above. In addition, our methods more directly follow the motivation and theory of classical (deterministic) optimization which is perhaps the most important area of physical and engineering science. Our results follow from related ideas in the deterministic theory. Thus, our approximation methods follow by guessing at an algorithm, but the proof of global convergence uses stochastic techniques because our trajectories are not differentiable. Along these lines, a general drift term in the trajectory equation is properly viewed as an added constraint and extends ideas given in the deterministic case by the first author.
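For the deterministic linear regulator mentioned above, a scalar discrete-time Riccati iteration shows the standard solution machinery. This is the classical LQR recursion, chosen by me as background, not the authors' stochastic method; all coefficients are illustrative.

```python
def lqr_gain(a, b, q, r, iters=200):
    """Discrete-time scalar LQR: iterate the Riccati recursion
    P <- q + a^2 P - (a b P)^2 / (r + b^2 P) to a fixed point,
    then return the feedback gain k for the law u = -k x."""
    P = q
    for _ in range(iters):
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    k = a * b * P / (r + b * b * P)
    return k, P

k, P = lqr_gain(a=1.0, b=1.0, q=1.0, r=1.0)
closed_loop = 1.0 - 1.0 * k      # a - b*k must be stable (|a - b*k| < 1)
```

For these unit coefficients the fixed point is the golden ratio, P = (1 + sqrt(5))/2, and the closed-loop pole 1 - k is well inside the unit interval.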
A linear stability analysis for nonlinear, grey, thermal radiative transfer problems
Wollaber, Allan B.; Larsen, Edward W.
2011-02-20
We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
Systematic regularization of linear inverse solutions of the EEG source localization problem.
Phillips, Christophe; Rugg, Michael D; Friston, Karl J
2002-09-01
Distributed linear solutions of the EEG source localization problem are used routinely. Here we describe an approach based on the weighted minimum norm method that imposes constraints using anatomical and physiological information derived from other imaging modalities to regularize the solution. In this approach the hyperparameters controlling the degree of regularization are estimated using restricted maximum likelihood (ReML). EEG data are always contaminated by noise, e.g., exogenous noise and background brain activity. The conditional expectation of the source distribution, given the data, is attained by carefully balancing the minimization of the residuals induced by noise and the improbability of the estimates as determined by their priors. This balance is specified by hyperparameters that control the relative importance of fitting and conforming to prior constraints. Here we introduce a systematic approach to this regularization problem, in the context of a linear observation model we have described previously. In this model, basis functions are extracted to reduce the solution space a priori in the spatial and temporal domains. The basis sets are motivated by knowledge of the evoked EEG response and information theory. In this paper we focus on an iterative "expectation-maximization" procedure to jointly estimate the conditional expectation of the source distribution and the ReML hyperparameters on which this solution rests. We used simulated data mixed with real EEG noise to explore the behavior of the approach with various source locations, priors, and noise levels. The results enabled us to conclude: (i) Solutions in the space of informed basis functions have a high face and construct validity, in relation to conventional analyses. (ii) The hyperparameters controlling the degree of regularization vary considerably with source geometry and noise. The second conclusion speaks to the usefulness of adaptive ReML hyperparameter estimates.
Skill complementarity enhances heterophily in collaboration networks
NASA Astrophysics Data System (ADS)
Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2016-01-01
Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity is positively correlated with production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems.
Strong gravitational lensing and dark energy complementarity
Linder, Eric V.
2004-01-21
In the search for the nature of dark energy most cosmological probes measure simple functions of the expansion rate. While powerful, these all involve roughly the same dependence on the dark energy equation of state parameters, with anticorrelation between its present value w_0 and time variation w_a. Quantities that have instead positive correlation, and so a sensitivity direction largely orthogonal to, e.g., distance probes, offer the hope of achieving tight constraints through complementarity. Such quantities are found in strong gravitational lensing observations of image separations and time delays. While degeneracy between cosmological parameters prevents full complementarity, strong lensing measurements to 1 percent accuracy can improve equation of state characterization by 15-50 percent. Next generation surveys should provide data on roughly 10^5 lens systems, though systematic errors will remain challenging.
Low energy description of quantum gravity and complementarity
NASA Astrophysics Data System (ADS)
Nomura, Yasunori; Varela, Jaime; Weinberg, Sean J.
2014-06-01
We consider a framework in which low energy dynamics of quantum gravity is described preserving locality, and yet taking into account the effects that are not captured by the naive global spacetime picture, e.g. those associated with black hole complementarity. Our framework employs a "special relativistic" description of gravity; specifically, gravity is treated as a force measured by the observer tied to the coordinate system associated with a freely falling local Lorentz frame. We identify, in simple cases, regions of spacetime in which low energy local descriptions are applicable as viewed from the freely falling frame; in particular, we identify a surface called the gravitational observer horizon on which the local proper acceleration measured in the observer's coordinates becomes the cutoff (string) scale. This allows for separating between the "low-energy" local physics and "trans-Planckian" intrinsically quantum gravitational (stringy) physics, and allows for developing physical pictures of the origins of various effects. We explore the structure of the Hilbert space in which the proposed scheme is realized in a simple manner, and classify its elements according to certain horizons they possess. We also discuss implications of our framework on the firewall problem. We conjecture that the complementarity picture may persist due to properties of trans-Planckian physics.
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.
1991-01-01
The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions, two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validate this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely match simulation results. Flight-test data are also presented.
NASA Astrophysics Data System (ADS)
Yang, Bian-Xia; Sun, Hong-Rui; Feng, Zhaosheng
In this paper, we are concerned with the unilateral global bifurcation structure of the fractional differential equation (-Δ)^α u(x) = λ a(x) u(x) + F(x, u, λ) for x ∈ Ω, with u = 0 in ℝ^N \ Ω, where the nonlinearity F is not necessarily differentiable. It is shown that there are two distinct unbounded subcontinua 𝒞+ and 𝒞- of the continuum 𝒞 emanating from [λ_1 - d, λ_1 + d] × {0}, and two unbounded subcontinua 𝒟+ and 𝒟- of the continuum 𝒟 emanating from [λ_1 - d̄, λ_1 + d̄] × {∞}. As an application of these unilateral global bifurcation results, we present the existence of the principal half-eigenvalues of the half-linear fractional eigenvalue problem. Finally, we deal with the existence of constant-sign solutions for a class of fractional nonlinear problems. The main results of this paper generalize the known results on classical Laplace operators to fractional Laplace operators.
Response of Non-Linear Shock Absorbers-Boundary Value Problem Analysis
NASA Astrophysics Data System (ADS)
Rahman, M. A.; Ahmed, U.; Uddin, M. S.
2013-08-01
A nonlinear boundary value problem of two degrees-of-freedom (DOF) untuned vibration damper systems using nonlinear springs and dampers has been numerically studied. As far as the untuned damper is concerned, sixteen different combinations of linear and nonlinear springs and dampers have been comprehensively analyzed, taking into account transient terms. For different cases, a comparative study is made of response versus time for different spring and damper types at three important frequency ratios: one at r = 1, one at r > 1 and one at r < 1. The response of the system is changed because of the spring and damper nonlinearities; the change is different for different cases. Accordingly, an initially stable absorber may become unstable with time and vice versa. The analysis also shows that higher nonlinearity terms make the system more unstable. Numerical simulation includes transient vibrations. Although the problems are much more complicated compared to those for a tuned absorber, a comparison of the results generated by the present numerical scheme with the exact ones shows quite reasonable agreement.
Method of expanding hyperspheres - an interior algorithm for linear programming problems
Chandrupatla, T.
1994-12-31
A new interior algorithm using some properties of hyperspheres is proposed for the solution of linear programming problems with inequality constraints: maximize c^T x subject to Ax ≤ b, where c and the rows of A are normalized in the Euclidean sense so that ‖c‖ = √(c^T c) = 1 and ‖a_i‖ = √(a_i a_i^T) = 1 for i = 1 to m. The feasible region is the polytope bounded by the constraint planes. We start from an interior point and pass a plane normal to c until it touches a constraint plane. Then a sphere is expanded so that it keeps contact with the previously touched planes, and the expansion proceeds until it touches another plane. The procedure is continued until the sphere touches the c-plane and n constraint planes. We move to the center of the sphere and repeat the process. The interior maximum is reached when the radius of the expanded sphere is less than a critical value, say ε. Problems of direction finding, determination of the incoming constraint, sphere jamming, and evaluation of the initial feasible point are discussed.
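The geometric primitive behind such a method, finding how far a sphere centered at an interior point can grow before meeting a constraint plane, and which constraint is the incoming one, can be sketched as follows. This is a minimal illustration under the normalization stated in the abstract (unit-norm rows of A), not the author's implementation; the function name is ours.

```python
import numpy as np

def incoming_constraint(A, b, x):
    """For a sphere grown from center x inside {y : A y <= b} with unit-norm
    rows a_i, the distance from x to the plane a_i . y = b_i is b_i - a_i . x.
    The largest inscribed radius, and the first ('incoming') constraint the
    sphere touches, are given by the minimum of these distances."""
    d = b - A @ x            # distances to each constraint plane
    i = int(np.argmin(d))
    return d[i], i

# Toy feasible region: the box |x1| <= 1, |x2| <= 1, written as Ax <= b.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0])
r, i = incoming_constraint(A, b, np.array([0.5, 0.2]))
```

From the point (0.5, 0.2) the nearest plane is x1 = 1 at distance 0.5, so the sphere jams against constraint 0 at radius 0.5.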
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance, relative to the feasible directions algorithm, when handling design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with a single constraint will reduce the cost of the total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving either in using the linear method or in using the KS function to replace constraints.
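The KS function mentioned above aggregates many constraint values g_i(x) ≤ 0 into one smooth scalar constraint, KS(g) = (1/ρ) ln Σ_i exp(ρ g_i), which bounds max_i g_i from above and tightens as ρ grows. A minimal sketch (the shifted form is a standard numerical safeguard, not specific to this report):

```python
import numpy as np

def ks(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate of constraint values g_i <= 0.
    Smooth upper bound on max(g): max(g) <= KS(g) <= max(g) + ln(len(g))/rho.
    Shifting by max(g) before exponentiating avoids overflow for large rho."""
    g = np.asarray(g, dtype=float)
    m = g.max()
    return m + np.log(np.exp(rho * (g - m)).sum()) / rho

g = np.array([-0.3, 0.1, -1.2])   # three constraint values at a design point
agg = ks(g, rho=100.0)            # single aggregated constraint value
```

Here the aggregate is dominated by the active constraint g = 0.1, so a single inequality KS(g) ≤ 0 stands in for all three.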
Haider, M A; Guilak, F
2000-06-01
The micropipette aspiration test has been used extensively in recent years as a means of quantifying cellular mechanics and molecular interactions at the microscopic scale. However, previous studies have generally modeled the cell as an infinite half-space in order to develop an analytical solution for a viscoelastic solid cell. In this study, an axisymmetric boundary integral formulation of the governing equations of incompressible linear viscoelasticity is presented and used to simulate the micropipette aspiration contact problem. The cell is idealized as a homogeneous and isotropic continuum with constitutive equation given by three-parameter (E, τ1, τ2) standard linear viscoelasticity. The formulation is used to develop a computational model via a "correspondence principle" in which the solution is written as the sum of a homogeneous (elastic) part and a nonhomogeneous part, which depends only on past values of the solution. Via a time-marching scheme, the solution of the viscoelastic problem is obtained by employing an elastic boundary element method with modified boundary conditions. The accuracy and convergence of the time-marching scheme are verified using an analytical solution. An incremental reformulation of the scheme is presented to facilitate the simulation of micropipette aspiration, a nonlinear contact problem. In contrast to the half-space model (Sato et al., 1990), this computational model accounts for nonlinearities in the cell response that result from a consideration of geometric factors including the finite cell dimension (radius R), curvature of the cell boundary, evolution of the cell-micropipette contact region, and curvature of the edges of the micropipette (inner radius a, edge curvature radius ε). Using 60 quadratic boundary elements, a micropipette aspiration creep test with ramp time t* = 0.1 s and ramp pressure p*/E = 0.8 is simulated for the cases a/R = 0.3, 0.4, 0.5 using mean parameter values for primary chondrocytes.
NASA Astrophysics Data System (ADS)
Zhou, Qinglong; Long, Yiming
2015-06-01
In this paper, we prove that the linearized system of elliptic triangle homographic solution of planar charged three-body problem can be transformed to that of the elliptic equilateral triangle solution of the planar classical three-body problem. Consequently, the results of Martínez, Samà and Simó (2006) [15] and results of Hu, Long and Sun (2014) [6] can be applied to these solutions of the charged three-body problem to get their linear stability.
Horizons of description: Black holes and complementarity
NASA Astrophysics Data System (ADS)
Bokulich, Peter Joshua Martin
Niels Bohr famously argued that a consistent understanding of quantum mechanics requires a new epistemic framework, which he named complementarity. This position asserts that even in the context of quantum theory, classical concepts must be used to understand and communicate measurement results. The apparent conflict between certain classical descriptions is avoided by recognizing that their application now crucially depends on the measurement context. Recently it has been argued that a new form of complementarity can provide a solution to the so-called information loss paradox. Stephen Hawking argues that the evolution of black holes cannot be described by standard unitary quantum evolution, because such evolution always preserves information, while the evaporation of a black hole will imply that any information that fell into it is irrevocably lost---hence a "paradox." Some researchers in quantum gravity have argued that this paradox can be resolved if one interprets certain seemingly incompatible descriptions of events around black holes as instead being complementary. In this dissertation I assess the extent to which this black hole complementarity can be undergirded by Bohr's account of the limitations of classical concepts. I begin by offering an interpretation of Bohr's complementarity and the role that it plays in his philosophy of quantum theory. After clarifying the nature of classical concepts, I offer an account of the limitations these concepts face, and argue that Bohr's appeal to disturbance is best understood as referring to these conceptual limits. Following preparatory chapters on issues in quantum field theory and black hole mechanics, I offer an analysis of the information loss paradox and various responses to it. I consider the three most prominent accounts of black hole complementarity and argue that they fail to offer sufficient justification for the proposed incompatibility between descriptions. The lesson that emerges from this
Wu, Z; Zhang, Y
2008-01-01
The double digestion problem for DNA restriction mapping has been proved to be NP-complete and intractable if the numbers of the DNA fragments become large. Several approaches to the problem have been tested and proved to be effective only for small problems. In this paper, we formulate the problem as a mixed-integer linear program (MIP) following Waterman (1995) in a slightly different form. With this formulation and using state-of-the-art integer programming techniques, we can solve randomly generated problems whose search space sizes are many orders of magnitude larger than previously reported test sizes.
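The underlying combinatorial question can be stated concretely: find orderings of the single-digest fragments of each enzyme whose merged cut sites reproduce the double-digest fragment multiset. A brute-force sketch on a tiny hypothetical instance (illustrative only; the factorial search is exactly what makes the MIP formulation necessary at realistic sizes):

```python
from itertools import permutations

def double_digest(a_frags, b_frags, ab_frags):
    """Search for orderings of enzyme-A and enzyme-B fragments whose merged
    cut sites reproduce the double-digest fragment multiset ab_frags."""
    total = sum(a_frags)
    target = sorted(ab_frags)
    for pa in permutations(a_frags):
        # cut sites of A: cumulative sums, excluding 0 and the full length
        sites_a = {sum(pa[:i]) for i in range(1, len(pa))}
        for pb in permutations(b_frags):
            sites_b = {sum(pb[:i]) for i in range(1, len(pb))}
            sites = sorted(sites_a | sites_b | {0, total})
            frags = sorted(s2 - s1 for s1, s2 in zip(sites, sites[1:]))
            if frags == target:
                return pa, pb
    return None

# Toy instance: enzyme A cuts an 8-unit strand at 3, enzyme B at 5;
# the double digest then yields fragments of lengths 3, 2, 3.
sol = double_digest([3, 5], [5, 3], [3, 2, 3])
```

An inconsistent double-digest multiset makes the search return None, which is how infeasibility shows up in this toy setting.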
APPLICATION OF LINEAR PROGRAMMING TO FACILITY MAINTENANCE PROBLEMS IN THE NAVY SHORE ESTABLISHMENT.
Descriptors: linear programming; naval shore facilities; maintenance; costs; mathematical models; management planning and control; manpower; feasibility studies; optimization; management engineering.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
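For readers unfamiliar with the LFT object itself: closing a partitioned matrix M against an uncertainty block Δ via the standard upper LFT F_u(M, Δ) = M22 + M21 Δ (I − M11 Δ)^(-1) M12 recovers the perturbed system. A minimal numerical sketch (the partition and the scalar example are ours, not the paper's construction method):

```python
import numpy as np

def upper_lft(M, delta, k):
    """Evaluate F_u(M, Delta) = M22 + M21 Delta (I - M11 Delta)^-1 M12,
    where the first k rows/columns of M close the loop through Delta."""
    M11, M12 = M[:k, :k], M[:k, k:]
    M21, M22 = M[k:, :k], M[k:, k:]
    inner = np.linalg.solve(np.eye(k) - M11 @ delta, M12)
    return M22 + M21 @ delta @ inner

# Scalar example: a gain 2 + delta with the uncertainty delta pulled out.
M = np.array([[0.0, 1.0],
              [1.0, 2.0]])          # M11 = 0, M12 = 1, M21 = 1, M22 = 2
val = upper_lft(M, np.array([[0.3]]), k=1)
```

With M11 = 0 the feedback path is trivial and the closed form reduces to 2 + 0.3, which is a quick sanity check on the implementation.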
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are in the form of optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. An efficient comparison framework compares the multi-route formulation and the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine if any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when a perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, cumulative arrival taxi time savings in the multi-route formulation can be as much as 3.6 hours greater than with the single route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single route formulation, but the average arrival taxi time is significantly decreased.
Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems
NASA Astrophysics Data System (ADS)
Puzyrev, Vladimir; Koric, Seid; Wilkin, Scott
2016-04-01
High performance computing is absolutely necessary for large-scale geophysical simulations. In order to obtain a realistic image of a geologically complex area, industrial surveys collect vast amounts of data, making the computational cost extremely high for the subsequent simulations. A major computational bottleneck of modeling and inversion algorithms is solving large, sparse, ill-conditioned systems of linear equations in complex domains with multiple right-hand sides. Recently, parallel direct solvers have been successfully applied to multi-source seismic and electromagnetic problems. These methods are robust and exhibit good performance, but often require large amounts of memory and have limited scalability. In this paper, we evaluate modern direct solvers on large-scale modeling examples that previously were considered unachievable with these methods. Performance and scalability tests utilizing up to 65,536 cores on the Blue Waters supercomputer clearly illustrate the robustness, efficiency and competitiveness of direct solvers compared to iterative techniques. Wide use of direct methods utilizing modern parallel architectures will allow modeling tools to accurately support multi-source surveys and 3D data acquisition geometries, thus promoting a more efficient use of the electromagnetic methods in geophysics.
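The factor-once, solve-many pattern that makes direct solvers attractive for multi-source surveys can be sketched with SciPy's SuperLU interface. This is an illustrative stand-in (a 1-D Laplacian in place of a geophysical operator); the solvers evaluated in the paper are parallel distributed-memory codes.

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu

# Sparse SPD-like system: 1-D Laplacian stands in for the EM operator.
n = 200
A = csc_matrix(diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)))

# Factor once, then reuse the LU factors for every right-hand side;
# each "source" in a multi-source survey contributes one RHS.
lu = splu(A)
B = np.random.default_rng(0).standard_normal((n, 3))   # 3 sources
X = np.column_stack([lu.solve(B[:, j]) for j in range(B.shape[1])])
```

The factorization cost is paid once; each additional source only costs a pair of triangular solves.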
NASA Astrophysics Data System (ADS)
Siddheshwar, P. G.; Mahabaleswar, U. S.; Andersson, H. I.
2013-08-01
The paper discusses a new analytical procedure for solving the non-linear boundary layer equation arising in a linear stretching sheet problem involving a Newtonian/non-Newtonian liquid. Using a technique akin to perturbation, the problem gives rise to a system of non-linear governing differential equations that are solved exactly. An analytical expression is obtained for the stream function and velocity as a function of the stretching parameters. The Clairaut equation is obtained on consideration of consistency, and its solution is shown to be that of the stretching sheet boundary layer equation. The present study throws light on the analytical solution of a class of boundary layer equations arising in the stretching sheet problem.
Zainudin, Suhaila; Arif, Shereena M.
2017-01-01
Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods has been inaccurate prediction of cascade motifs. Cascade error is defined as the wrong prediction of cascade motifs, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, the discussion on specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted by past studies were not specifically geared towards proving the ability of GRN prediction methods to avoid the occurrence of cascade errors. Hence, this research proposes Multiple Linear Regression (MLR) to infer GRNs from gene expression data and to avoid wrongly inferring an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations in the real experiment datasets was far less than the number of predictors, some predictors were eliminated by extracting random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade error by using a novel experimental procedure proposed in this work. The experiment revealed that the number of cascade errors was minimal. Apart from that, the Belsley collinearity test showed that multicollinearity greatly affected the datasets used in this experiment. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5. PMID:28250767
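The core idea, regressing each gene's expression on all other genes so that an indirect regulator's coefficient is suppressed once the direct regulator is in the model, can be sketched with a synthetic cascade. This toy example is ours, not the paper's pipeline or datasets.

```python
import numpy as np

def mlr_grn(X):
    """Regress each gene on all others; entry W[i, j] is the coefficient of
    gene j in the regression for gene i (a candidate regulatory weight)."""
    n_samples, n_genes = X.shape
    W = np.zeros((n_genes, n_genes))
    for i in range(n_genes):
        others = [j for j in range(n_genes) if j != i]
        coef, *_ = np.linalg.lstsq(X[:, others], X[:, i], rcond=None)
        W[i, others] = coef
    return W

# Synthetic cascade A -> B -> C: B depends on A, C depends on B only.
rng = np.random.default_rng(1)
a = rng.standard_normal(500)
b = 0.8 * a + 0.1 * rng.standard_normal(500)
c = 0.9 * b + 0.1 * rng.standard_normal(500)
W = mlr_grn(np.column_stack([a, b, c]))
```

In the regression for gene C, the coefficient on B stays near its true value of 0.9 while the coefficient on A is driven toward zero, which is exactly the avoidance of the cascade error A → C.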
The late Universe with non-linear interaction in the dark sector: The coincidence problem
NASA Astrophysics Data System (ADS)
Bouhmadi-López, Mariam; Morais, João; Zhuk, Alexander
2016-12-01
We study the Universe at the late stage of its evolution and deep inside the cell of uniformity. At such scales the Universe is highly inhomogeneous and filled with discretely distributed inhomogeneities in the form of galaxies and groups of galaxies. As a matter source, we consider dark matter (DM) and dark energy (DE) with a non-linear interaction Q = 3Hγ ε̄_DE ε̄_DM / (ε̄_DE + ε̄_DM), where γ is a constant. We assume that DM is pressureless and that DE has a constant equation of state parameter w. In the considered model, the energy densities of the dark sector components exhibit scaling behaviour, ε̄_DM / ε̄_DE ∼ (a_0 / a)^(-3(w + γ)). We investigate the possibility that the perturbations of DM and DE, which are interacting among themselves, could be coupled to the galaxies, with the former being concentrated around them. To carry out our analysis, we consider the theory of scalar perturbations (within the mechanical approach), and obtain the sets of parameters (w, γ) which do not contradict it. We conclude that two sets, (w = -2/3, γ = 1/3) and (w = -1, γ = 1/3), are of special interest. First, the energy densities of DM and DE in these cases are concentrated around galaxies, confirming that they are coupled fluids. Second, we show that for both of them the coincidence problem is less severe than in the standard ΛCDM. Third, the set (w = -1, γ = 1/3) is within the observational constraints. Finally, we also obtain an expression for the gravitational potential in the considered model.
Anastassi, Z. A.; Simos, T. E.
2010-09-30
We develop a new family of explicit symmetric linear multistep methods for the efficient numerical solution of the Schroedinger equation and related problems with oscillatory solution. The new methods are trigonometrically fitted and have improved intervals of periodicity as compared to the corresponding classical method with constant coefficients and other methods from the literature. We also apply the methods along with other known methods to real periodic problems, in order to measure their efficiency.
The Limits of Black Hole Complementarity
NASA Astrophysics Data System (ADS)
Susskind, Leonard
Black hole complementarity, as originally formulated in the 1990s by Preskill, 't Hooft, and myself, is now being challenged by the Almheiri-Marolf-Polchinski-Sully firewall argument. The AMPS argument relies on an implicit assumption, the "proximity" postulate, which says that the interior of a black hole must be constructed from degrees of freedom that are physically near the black hole. The proximity postulate manifestly contradicts the idea that interior information is redundant with information in Hawking radiation, which is very far from the black hole. AMPS argue that a violation of the proximity postulate would lead to a contradiction in a thought-experiment in which Alice distills the Hawking radiation and brings a bit back to the black hole. According to AMPS the only way to protect against the contradiction is for a firewall to form at the Page time. But the measurement that Alice must make is of such a fine-grained nature that carrying it out before the black hole evaporates may be impossible. Harlow and Hayden have found evidence that the limits of quantum computation do in fact prevent Alice from carrying out her experiment in less than exponential time. If their conjecture is correct then black hole complementarity may be alive and well. My aim here is to give an overview of the firewall argument, and its basis in the proximity postulate, as well as the counterargument based on computational complexity, as conjectured by Harlow and Hayden.
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
NASA Astrophysics Data System (ADS)
Rozhdestvenskaya, Ekaterina A.
2011-02-01
The existence of a solution of the Dirichlet problem for a second order elliptic equation with non-linear part discontinuous in the phase variable is proved in the cases of resonance on the left and resonance on the right of the first eigenvalue of the differential operator in the situation where the Landesman-Lazer conditions do not hold.
NASA Astrophysics Data System (ADS)
Renac, Florent
2011-06-01
An algorithm for stabilizing linear iterative schemes is developed in this study. The recursive projection method is applied in order to stabilize divergent numerical algorithms. A criterion for selecting the divergent subspace of the iteration matrix with an approximate eigenvalue problem is introduced. The performance of the present algorithm is investigated in terms of storage requirements and CPU costs and is compared to the original Krylov criterion. Theoretical results on the accuracy of the divergent subspace selection are established. The method is then applied to the solution of the linear advection-diffusion equation and to a sensitivity analysis for a turbulent transonic flow in the context of aerodynamic shape optimization. Numerical experiments demonstrate better robustness and faster convergence properties of the stabilization algorithm with the new criterion based on the approximate eigenvalue problem. This criterion requires only slight additional operations and memory which vanish in the limit of large linear systems.
Růzek, Michal; Sedlák, Petr; Seiner, Hanus; Kruisová, Alena; Landa, Michal
2010-12-01
In this paper, linearized approximations of both the forward and the inverse problems of resonant ultrasound spectroscopy for the determination of mechanical properties of thin surface layers are presented. The linear relations between the frequency shifts induced by the deposition of the layer and the in-plane elastic coefficients of the layer are derived and inverted, the applicability range of the obtained linear model is discussed by a comparison with nonlinear models and finite element method (FEM), and an algorithm for the estimation of experimental errors in the inversely determined elastic coefficients is described. In the final part of the paper, the linearized inverse procedure is applied to evaluate elastic coefficients of a 310 nm thick diamond-like carbon layer deposited on a silicon substrate.
Beklaryan, Leva A
2011-03-31
A boundary value problem and an initial-boundary value problem are considered for a linear functional differential equation of point type. A suitable scale of functional spaces is introduced and existence theorems for solutions are stated in terms of this scale, in a form analogous to Noether's theorem. A key fact is established for the initial-boundary value problem: the space of classical solutions of the adjoint equation must be extended to include impulsive solutions. A test for the pointwise completeness of solutions is obtained. The results presented are based on a formalism developed by the author for this type of equation. Bibliography: 7 titles.
NASA Technical Reports Server (NTRS)
Halyo, N.; Caglayan, A. K.
1976-01-01
This paper considers the control of a continuous linear plant disturbed by white plant noise when the control is constrained to be a piecewise constant function of time; i.e. a stochastic sampled-data system. The cost function is the integral of quadratic error terms in the state and control, thus penalizing errors at every instant of time while the plant noise disturbs the system continuously. The problem is solved by reducing the constrained continuous problem to an unconstrained discrete one. It is shown that the separation principle for estimation and control still holds for this problem when the plant disturbance and measurement noise are Gaussian.
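The reduction from a constrained continuous problem to an unconstrained discrete one can be sketched generically: discretize the plant under a zero-order hold and solve the resulting discrete LQR problem. This is not the paper's derivation (in particular, the exactly discretized cost carries state-control cross terms that are omitted here); the matrices `A`, `B`, `Q`, `R` and the period `h` are illustrative.

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_are

# continuous plant dx/dt = A x + B u, piecewise-constant u with period h
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
h = 0.1

# Van Loan block exponential gives the exact zero-order-hold discretization
n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
E = expm(M * h)
Ad, Bd = E[:n, :n], E[:n, n:]

# discrete LQR on the sampled system
Q = np.eye(n)
R = np.eye(m)
P = solve_discrete_are(Ad, Bd, Q, R)
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
```

The resulting gain `K` is applied as a piecewise-constant feedback over each sampling interval; stability of the sampled closed loop can be checked from the eigenvalues of `Ad - Bd @ K`.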
NASA Astrophysics Data System (ADS)
Hellmich, Ch.; Ulm, F.-J.; Mang, H. A.
In this work, after a short review of the respective thermodynamic formulation, the algorithmic treatment of coupled chemo-thermal problems with exo- or endothermal reactions is addressed. The Finite Element Method (FEM) is serving as the analysis tool. Consistent linearization of the discretized evolution equations results in quadratic convergence of the global Newton-Raphson equilibrium iteration. This renders solutions of practical engineering problems feasible. The range of these problems encompasses the early age behavior of concrete as well as agricultural applications. In order to demonstrate the applicability of the presented material law, a 3D material test for shotcrete is re-analyzed.
ERIC Educational Resources Information Center
Sole, Marla A.
2016-01-01
Open-ended questions that can be solved using different strategies help students learn and integrate content, and provide teachers with greater insights into students' unique capabilities and levels of understanding. This article provides a problem that was modified to allow for multiple approaches. Students tended to employ high-powered, complex,…
Black hole complementarity in gravity's rainbow
Gim, Yongwan; Kim, Wontae E-mail: wtkim@sogang.ac.kr
2015-05-01
To see how gravity's rainbow works for black hole complementarity, we evaluate the energy required for duplication of information in the context of black hole complementarity by calculating the critical value of the rainbow parameter in a certain class of rainbow Schwarzschild black holes. The resultant energy has a well-defined limit for vanishing rainbow parameter, which characterizes the deformation of the relativistic dispersion relation in the freely falling frame. It shows that duplication of information in quantum mechanics is not allowed below a certain critical value of the rainbow parameter; however, it might be possible above that critical value, so that a consistent formulation of our model requires additional constraints or some other resolution in the latter case.
Dark matter complementarity and the Z' portal
NASA Astrophysics Data System (ADS)
Alves, Alexandre; Berlin, Asher; Profumo, Stefano; Queiroz, Farinaldo S.
2015-10-01
Z' gauge bosons arise in many particle physics models as mediators between the dark and visible sectors. We exploit dark matter (DM) complementarity and derive stringent and robust collider, direct and indirect constraints, as well as limits from the muon magnetic moment. We rule out almost the entire region of the parameter space that yields the right dark matter thermal relic abundance, using a generic parametrization of the Z'-fermion couplings normalized to the standard model Z-fermion couplings for dark matter masses in the 8 GeV-5 TeV range. We conclude that mediators lighter than 2.1 TeV are excluded regardless of the DM mass, and that depending on the Z'-fermion coupling strength much heavier masses are needed to reproduce the DM thermal relic abundance while avoiding existing limits.
Horizon complementarity in elliptic de Sitter space
NASA Astrophysics Data System (ADS)
Hackl, Lucas; Neiman, Yasha
2015-02-01
We study a quantum field in elliptic de Sitter space dS4/Z2—the spacetime obtained from identifying antipodal points in dS4. We find that the operator algebra and Hilbert space cannot be defined for the entire space, but only for observable causal patches. This makes the system into an explicit realization of the horizon complementarity principle. In the absence of a global quantum theory, we propose a recipe for translating operators and states between observers. This translation involves information loss, in accordance with the fact that two observers see different patches of the spacetime. As a check, we recover the thermal state at the de Sitter temperature as a state that appears the same to all observers. This thermal state arises from the same functional that, in ordinary dS4, describes the Bunch-Davies vacuum.
A Novel Numerical Algorithm of Numerov Type for 2D Quasi-linear Elliptic Boundary Value Problems
NASA Astrophysics Data System (ADS)
Mohanty, R. K.; Kumar, Ravindra
2014-11-01
In this article, using three function evaluations, we discuss a nine-point compact scheme of O(Δy² + Δx⁴) based on a Numerov-type discretization for the solution of 2D quasi-linear elliptic equations with given Dirichlet boundary conditions, where Δy > 0 and Δx > 0 are the grid sizes in the y- and x-directions, respectively. Iterative methods for the diffusion-convection equation are discussed in detail. We use block iterative methods to solve the system of algebraic linear and nonlinear difference equations. Comparative results for some physical problems are given to illustrate the usefulness of the proposed method.
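The nine-point O(Δy² + Δx⁴) scheme itself is not reproduced here; as a much simpler stand-in, the following sketch solves a model Poisson problem with the standard five-point scheme and point (rather than block) Gauss-Seidel iteration, just to illustrate solving a system of elliptic difference equations iteratively. The grid size and sweep count are arbitrary.

```python
import numpy as np

# Model problem: -u_xx - u_yy = f on the unit square, u = 0 on the boundary,
# with exact solution u = sin(pi x) sin(pi y), hence f = 2 pi^2 u.
n = 20                      # grid intervals per direction
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = 2.0 * np.pi**2 * exact

u = np.zeros((n + 1, n + 1))  # homogeneous Dirichlet data
for _ in range(1500):         # point Gauss-Seidel sweeps
    for i in range(1, n):
        for j in range(1, n):
            u[i, j] = 0.25 * (u[i + 1, j] + u[i - 1, j]
                              + u[i, j + 1] + u[i, j - 1]
                              + h * h * f[i, j])

err = np.max(np.abs(u - exact))   # dominated by O(h^2) truncation error
```

A higher-order compact scheme like the one in the abstract would replace the five-point stencil with the nine-point Numerov stencil, and a block iterative method would update whole grid lines at once instead of single points.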
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
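For comparison, a naive baseline solves one non-negative least squares problem per observation vector; the combinatorial algorithm gains its speed by grouping columns that share the same passive (unconstrained) variable set and reusing one factorization per group. The sketch below shows only the naive per-column baseline, with made-up data:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((30, 5)))     # design matrix
X_true = np.abs(rng.standard_normal((5, 8))) # nonnegative ground truth
B = A @ X_true                               # 8 observation vectors

# Naive baseline: one independent NNLS solve per observation column.
# The fast combinatorial algorithm instead batches columns whose optimal
# active sets coincide, so the dominant linear algebra is shared.
X = np.column_stack([nnls(A, B[:, k])[0] for k in range(B.shape[1])])
```

With thousands of observation vectors (e.g., spectral images), the per-column approach repeats nearly identical work, which is exactly the redundancy the combinatorial reorganization removes.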
Influence of geometrical parameters on the linear stability of a Bénard-Marangoni problem
NASA Astrophysics Data System (ADS)
Hoyas, S.; Fajardo, P.; Pérez-Quiles, M. J.
2016-04-01
A linear stability analysis of a thin liquid film flowing over a plate is performed. The analysis is performed in an annular domain when momentum diffusivity and thermal diffusivity are comparable (relatively low Prandtl number, Pr = 1.2). The influence of the aspect ratio (Γ) and gravity, through the Bond number (Bo), on the linear stability of the flow is analyzed. Two different regions in the Γ-Bo plane have been identified. In the first, the basic state presents a linear regime (in which the temperature gradient does not change sign with r). In the second, the flow presents a nonlinear regime, also called return flow. A great diversity of bifurcations has been found just by changing the domain depth d. The results obtained in this work are in agreement with some reported experiments, and give a deeper insight into the effect of physical parameters on bifurcations.
Kinematics and tribological problems of linear guidance systems in four contact points
NASA Astrophysics Data System (ADS)
Popescu, A.; Olaru, D.
2016-08-01
A procedure has been developed to determine both the value of the ball's angular velocity and the angular position of this velocity, according to the normal loads in a linear system with four contact points. The program is based on the variational analysis of the power losses in ball-race contacts. On this basis, the two kinematic parameters of the ball (angular velocity and angular position) were determined for a KUE 35 linear system as a function of the C/P ratio.
NASA Technical Reports Server (NTRS)
Sain, M. K.; Antsaklis, P. J.; Gejji, R. R.; Wyman, B. F.; Peczkowski, J. L.
1981-01-01
Zames (1981) has observed that there is, in general, no 'separation principle' to guarantee optimality of a division between control law design and filtering of plant uncertainty. Peczkowski and Sain (1978) have solved a model matching problem using transfer functions. Taking into consideration this investigation, Peczkowski et al. (1979) proposed the Total Synthesis Problem (TSP), wherein both the command/output-response and command/control-response are to be synthesized, subject to the plant constraint. The TSP concept can be subdivided into a Nominal Design Problem (NDP), which is not dependent upon specific controller structures, and a Feedback Synthesis Problem (FSP), which is. Gejji (1980) found that NDP was characterized in terms of the plant structural matrices and a single, 'good' transfer function matrix. Sain et al. (1981) have extended this NDP work. The present investigation is concerned with a study of FSP for the unity feedback case. NDP, together with feedback synthesis, is understood as a Total Synthesis Problem.
General theory of spherically symmetric boundary-value problems of the linear transport theory.
NASA Technical Reports Server (NTRS)
Kanal, M.
1972-01-01
A general theory of spherically symmetric boundary-value problems of the one-speed neutron transport theory is presented. The formulation is also applicable to the 'gray' problems of radiative transfer. The Green's function for the purely absorbing medium is utilized in obtaining the normal mode expansion of the angular densities for both interior and exterior problems. As the integral equations for unknown coefficients are regular, a general class of reduction operators is introduced to reduce such regular integral equations to singular ones with a Cauchy-type kernel. Such operators then permit one to solve the singular integral equations by the standard techniques due to Muskhelishvili. We discuss several spherically symmetric problems. However, the treatment is kept sufficiently general to deal with problems lacking azimuthal symmetry. In particular the procedure seems to work for regions whose boundary coincides with one of the coordinate surfaces for which the Helmholtz equation is separable.
NASA Technical Reports Server (NTRS)
Rosen, I. G.; Wang, C.
1990-01-01
It is established that solutions to the discrete or sampled-time linear quadratic regulator problem and the associated Riccati equation for infinite dimensional systems converge to the solutions of the corresponding continuous time problem and equation as the length of the sampling interval tends to zero (i.e., as the sampling rate tends to infinity). Both the finite and infinite time horizon problems are studied. In the finite time horizon case, strong continuity of the operators which define the control system and performance index, together with a stability and consistency condition on the sampling scheme, are required. For the infinite time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary (delay) system, and a flexible beam are presented and discussed.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
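The paper's ILP model is not given in the abstract; the following toy sketch shows the flavor of a grid-based location problem with fixed costs, as a covering variant solved by exhaustive search (exactly the kind of enumeration that becomes intractable at scale and motivates ILP solvers and decomposition heuristics). All data are invented.

```python
import itertools
import numpy as np

# Toy model: open grid cells (candidate facilities) so that every demand
# point lies within radius r of an open cell, minimizing total fixed cost.
cells = np.array([[0, 0], [0, 2], [2, 0], [2, 2]], dtype=float)
cost = np.array([3.0, 2.0, 2.0, 4.0])        # fixed opening costs
demand = np.array([[0, 1], [1, 0], [2, 1], [1, 2]], dtype=float)
r = 1.5

# cover[d, c] is True when cell c can serve demand point d
cover = np.linalg.norm(demand[:, None, :] - cells[None, :, :], axis=2) <= r

best_cost, best_open = np.inf, None
for k in range(1, len(cells) + 1):           # brute force (tiny instance only)
    for S in itertools.combinations(range(len(cells)), k):
        if cover[:, list(S)].any(axis=1).all():   # all demand covered?
            c = cost[list(S)].sum()
            if c < best_cost:
                best_cost, best_open = c, S
```

An ILP formulation would express the same model with binary open/close variables, a cost objective, and one covering constraint per demand point, which a branch-and-bound solver handles far beyond the reach of enumeration.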
NASA Technical Reports Server (NTRS)
Kleinman, D. L.
1976-01-01
A numerical technique is given for solving the matrix quadratic equation that arises in the optimal stationary control of linear systems with state (and/or control) dependent noise. The technique fully exploits existing, efficient algorithms for the matrix Lyapunov and Riccati equations. The computational requirements are discussed, with an associated example.
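The idea of attacking a Riccati-type equation through a sequence of Lyapunov solves can be sketched with the classical Newton-Kleinman iteration for a deterministic continuous-time Riccati equation. This is not the paper's state-dependent-noise equation; the plant data are illustrative, and the open-loop-stable `A` lets the iteration start from the stabilizing gain `K = 0`.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # open-loop stable, so K0 = 0 works
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

K = np.zeros((1, 2))
for _ in range(30):
    # Newton-Kleinman step: one Lyapunov equation per iteration,
    # Ak' P + P Ak = -(Q + K' R K) with Ak = A - B K
    Ak = A - B @ K
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)

# reference solution of the same algebraic Riccati equation
P_ref = solve_continuous_are(A, B, Q, R)
```

From a stabilizing starting gain, the iterates converge monotonically (and ultimately quadratically) to the Riccati solution, so each pass needs only a linear Lyapunov solver.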
Constructive Processes in Linear Order Problems Revealed by Sentence Study Times
ERIC Educational Resources Information Center
Mynatt, Barbee T.; Smith, Kirk H.
1977-01-01
This research was a further test of the theory of constructive processes proposed by Foos, Smith, Sabol, and Mynatt (1976) to account for differences among presentation orders in the construction of linear orders. This theory is composed of different series of mental operations that must be performed when an order relationship is integrated with…
NASA Astrophysics Data System (ADS)
Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.
2012-11-01
Variational finite-difference methods of solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in a traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of using the variational finite-difference methods to solve linear and nonlinear problems of the statics of shells (plates).
Generalized Quasi-Variational Inequality and Implicit Complementarity Problems
1989-10-01
NASA Astrophysics Data System (ADS)
Moryakov, A. V.
2016-12-01
An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by its simplicity and by the possibility of solving nonlinear problems with a correction of the operator based on the solution obtained in the previous iteration.
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1978-01-01
The formulation of the classical Linear-Quadratic-Gaussian stochastic control problem as employed in low thrust navigation analysis is reviewed. A reformulation is then presented which eliminates a potentially unreliable matrix subtraction in the control calculations, improves the computational efficiency, and provides for a cleaner computational interface between the estimation and control processes. Lastly, the application of the U-D factorization method to the reformulated equations is examined with the objective of achieving a complete set of factored equations for the joint estimation and control problem.
A discussion of a homogenization procedure for a degenerate linear hyperbolic-parabolic problem
NASA Astrophysics Data System (ADS)
Flodén, L.; Holmbom, A.; Jonasson, P.; Lobkova, T.; Lindberg, M. Olsson; Zhang, Y.
2017-01-01
We study the homogenization of a hyperbolic-parabolic PDE with oscillations in one fast spatial scale. Moreover, the first order time derivative has a degenerate coefficient passing to infinity when ɛ→0. We obtain a local problem which is of elliptic type, while the homogenized problem is also in some sense an elliptic problem but with the limit for ɛ-1∂tuɛ as an undetermined extra source term in the right-hand side. The results are somewhat surprising and work remains to obtain a fully rigorous treatment. Hence the last section is devoted to a discussion of the reasonability of our conjecture including numerical experiments.
Beam dynamics in super-conducting linear accelerator: Problems and solutions
NASA Astrophysics Data System (ADS)
Senichev, Yu.; Bogdanov, A.; Maier, R.; Vasyukhin, N.
2006-03-01
A linac based on SC cavities has special features. Due to specific requirements, it is desirable for the SC cavity to have a constant accelerating-cell geometry with a limited number of cavity families. All cavities are divided into modules, and each module is housed in one cryostat. First, such a cavity geometry leads to non-synchronism. Second, the inter-cryostat drift space parametrically perturbs the longitudinal motion. In this article, we study the non-linear resonant effects due to the inter-cryostat drift space, using the separatrix formalism for a super-conducting linear accelerator [Yu. Senichev, A. Bogdanov, R. Maier, Phys. Rev. ST AB 6 (2003) 124001]. Methods to avoid or to compensate the resonant effect are also presented. We consider 3D beam dynamics together with space-charge effects. The final lattice meets all physical requirements.
Complementarity and entanglement in quantum information theory
NASA Astrophysics Data System (ADS)
Tessier, Tracey Edward
This research investigates two inherently quantum mechanical phenomena, namely complementarity and entanglement, from an information-theoretic perspective. Beyond philosophical implications, a thorough grasp of these concepts is crucial for advancing our understanding of foundational issues in quantum mechanics, as well as in studying how the use of quantum systems might enhance the performance of certain information processing tasks. The primary goal of this thesis is to shed light on the natures and interrelationships of these phenomena by approaching them from the point of view afforded by information theory. We attempt to better understand these pillars of quantum mechanics by studying the various ways in which they govern the manipulation of information, while at the same time gaining valuable insight into the roles they play in specific applications. The restrictions that nature places on the distribution of correlations in a multipartite quantum system play fundamental roles in the evolution of such systems and yield vital insights into the design of protocols for the quantum control of ensembles with potential applications in the field of quantum computing. By augmenting the existing formalism for quantifying entangled correlations, we show how this entanglement sharing behavior may be studied in increasingly complex systems of both theoretical and experimental significance. Further, our results shed light on the dynamical generation and evolution of multipartite entanglement by demonstrating that individual members of an ensemble of identical systems coupled to a common probe can become entangled with one another, even when they do not interact directly. The findings presented in this thesis support the conjecture that Hilbert space dimension is an objective property of a quantum system since it constrains the number of valid conceptual divisions of the system into subsystems. These arbitrary observer-induced distinctions are integral to the theory since
A holographic model for black hole complementarity
NASA Astrophysics Data System (ADS)
Lowe, David A.; Thorlacius, Larus
2016-12-01
We explore a version of black hole complementarity, where an approximate semiclassical effective field theory for interior infalling degrees of freedom emerges holographically from an exact evolution of exterior degrees of freedom. The infalling degrees of freedom have a complementary description in terms of outgoing Hawking radiation and must eventually decohere with respect to the exterior Hamiltonian, leading to a breakdown of the semiclassical description for an infaller. Trace distance is used to quantify the difference between the complementary time evolutions, and to define a decoherence time. We propose a dictionary where the evolution with respect to the bulk effective Hamiltonian corresponds to mean field evolution in the holographic theory. In a particular model for the holographic theory, which exhibits fast scrambling, the decoherence time coincides with the scrambling time. The results support the hypothesis that decoherence of the infalling holographic state and disruptive bulk effects near the curvature singularity are complementary descriptions of the same physics, which is an important step toward resolving the black hole information paradox.
Quark lepton complementarity and renormalization group effects
Schmidt, Michael A.; Smirnov, Alexei Yu.
2006-12-01
We consider a scenario for the quark-lepton complementarity relations between mixing angles in which the bimaximal mixing follows from the neutrino mass matrix. According to this scenario, in the lowest order the angle θ12 is ≈1σ (1.5°-2°) above the best fit point, coinciding practically with the tribimaximal mixing prediction. Realization of this scenario in the context of the seesaw type-I mechanism with leptonic Dirac mass matrices approximately equal to the quark mass matrices is studied. We calculate the renormalization group corrections to θ12 as well as to θ13 in the standard model (SM) and minimal supersymmetric standard model (MSSM). We find that in a large part of the parameter space the corrections δθ12 are small or negligible. In the MSSM version of the scenario, the correction δθ12 is in general positive. Small negative corrections appear in the case of an inverted mass hierarchy and opposite CP parities of ν1 and ν2, when leading contributions to the θ12 running are strongly suppressed. The corrections are negative in the SM version in a large part of the parameter space for values of the relative CP phase of ν1 and ν2 satisfying φ > π/2.
Bohr's Principle of Complementarity and Beyond
NASA Astrophysics Data System (ADS)
Jones, R.
2004-05-01
All knowledge is of an approximate character and always will be (Russell, Human Knowledge, 1948, pg 497, 507). The laws of nature are not unique (Smolin, Three Roads to Quantum Gravity, 2001, pg 195). There may be a number of different sets of equations which describe our data just as well as the present known laws do (Mitchell, Machine Learning, 1997, pg 65-66, and Cooper, Machine Learning, Vol. 9, 1992, pg 319). In the future every field of intellectual study will possess multiple theories of its domain, and scientific work and engineering will be performed based on the ensemble predictions of ALL of these. In some cases the theories may be quite divergent, differing greatly one from the other. The idea can be considered an extension of Bohr's notions of complementarity: "...different experimental arrangements.. described by different physical concepts...together and only together exhaust the definable information we can obtain about the object" (Folse, The Philosophy of Niels Bohr, 1985, pg 238). This idea is not postmodernism. Witchdoctors' theories will not form a part of medical science. Objective data, not human opinion, will decide which theories we use and how we weight their predictions.
Linear and nonlinear pattern selection in Rayleigh-Benard stability problems
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1993-01-01
A new algorithm is introduced to compute finite-amplitude states using primitive variables for Rayleigh-Benard convection on relatively coarse meshes. The algorithm is based on a finite-difference matrix-splitting approach that separates all physical and dimensional effects into one-dimensional subsets. The nonlinear pattern selection process for steady convection in an air-filled square cavity with insulated side walls is investigated for Rayleigh numbers up to 20,000. The internalization of disturbances that evolve into coherent patterns is investigated and transient solutions from linear perturbation theory are compared with and contrasted to the full numerical simulations.
Experimental test of Bohr's complementarity principle with single neutral atoms
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Tian, Yali; Yang, Chen; Zhang, Pengfei; Li, Gang; Zhang, Tiancai
2016-12-01
An experimental test of the quantum complementarity principle based on single neutral atoms trapped in a blue-detuned bottle trap was performed. A Ramsey interferometer was used to assess the wavelike or particlelike behavior with the second π/2 rotation on or off. The wavelike behavior or particlelike behavior is characterized by the visibility V of the interference or the predictability P of which-path information, respectively. The measured results fulfill the complementarity relation P² + V² ≤ 1. Imbalanced losses were deliberately introduced to the system, and we find the complementarity relation is then formally "violated." All the experimental results can be completely explained theoretically by quantum mechanics without considering interference between wave and particle behaviors. This observation complements existing information concerning Bohr's complementarity principle based on the wave-particle duality of a massive quantum system.
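The duality relation can be checked numerically in a toy model: a qubit prepared in cos(a)|0> + sin(a)|1>, with fringe contrast degraded by a decoherence factor g in [0, 1] (a crude stand-in for the imbalanced losses in the experiment). The functions and parameters below are illustrative, not the experiment's model.

```python
import numpy as np

def predictability(a):
    # which-path knowledge from the population imbalance
    return abs(np.cos(a)**2 - np.sin(a)**2)

def visibility(a, g):
    # Ramsey fringe contrast, reduced by decoherence factor g
    return 2.0 * abs(np.cos(a) * np.sin(a)) * g

angles = np.linspace(0.0, np.pi / 2, 50)
vals = [(g, predictability(a)**2 + visibility(a, g)**2)
        for g in (1.0, 0.7, 0.3) for a in angles]
```

For g = 1 (a pure state) the bound is saturated, P² + V² = 1; any loss of coherence pushes the sum strictly below 1, so a formal "violation" can only come from effects outside this simple model.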
ERIC Educational Resources Information Center
Lawrence, Virginia
No longer just a user of commercial software, the 21st century teacher is a designer of interactive software based on theories of learning. This software, a comprehensive study of straightline equations, enhances conceptual understanding, sketching, graphic interpretive and word problem solving skills as well as making connections to real-life and…
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.
The complementarity model of brain-body relationship.
Walach, Harald
2005-01-01
We introduce the complementarity concept to understand mind-body relations and to address the question of why the biopsychosocial model has been praised but, in fact, not integrated into medicine. By complementarity, we mean that two incompatible descriptions have to be used to describe something in full. The complementarity model states that the physical and the mental sides of the human organism are two complementary notions. This contradicts the prevailing materialist notion that mental and psychological processes are emergent properties of an organism. The complementarity model also has consequences for a further understanding of biological processes. Complementarity is a defining property of quantum systems proper. Such systems exhibit correlated properties that result in coordinated behavior without signal transfer or interaction. This is termed EPR correlation or entanglement. Weak quantum theory, a generalized version of quantum mechanics proper, predicts entanglement also for macroscopic systems, provided a local and a global observable are complementary. Thus, complementarity could be the key to understanding holistically correlated behavior on different levels of systemic complexity.
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided for the problem that occurs when the data residuals are too large and insufficient data are available to justify augmenting the model.
A linear decomposition method for large optimization problems. Blueprint for development
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1982-01-01
A method is proposed for decomposing large optimization problems encountered in the design of engineering systems, such as an aircraft, into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system, based on the sensitivities of the suboptimization problem solutions at each level of the tree to the variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed, and the state of readiness of the implementation building blocks is reviewed, showing that the ingredients for the development are already available. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is further examined with respect to trends in computer hardware and software to point out that its efficiency can be amplified by network computing using parallel processors.
Fowler, Patrick W; Myrvold, Wendy
2011-11-17
Conjugated-circuit models for induced π ring currents differ in the types of circuit that they include and the weights attached to them. Choice of circuits for general π systems can be expressed compactly in terms of matchings of the circuit-deleted molecular graph. Variants of the conjugated-circuit model for induced π currents are shown to have simple closed-form solutions for linear polyacenes. Despite differing assumptions about the effect of cycle area, all the models predict the most intense perimeter current in the central rings, in general agreement with ab initio current-density maps. All tend to overestimate the rate of increase with N of the central ring current for the [N]polyacene, in comparison with molecular-orbital treatments using ipsocentric ab initio, pseudo-π, and Hückel-London approaches.
Why the Afshar experiment does not refute complementarity
NASA Astrophysics Data System (ADS)
Kastner, R. E.
A modified version of Young's experiment by Shahriar Afshar demonstrates that, prior to what appears to be a "which-way" measurement, an interference pattern exists. Afshar has claimed that this result constitutes a violation of the Principle of Complementarity. This paper discusses the implications of this experiment and considers how Cramer's Transactional Interpretation easily accommodates the result. It is also shown that the Afshar experiment is analogous in key respects to a spin one-half particle prepared as "spin up along x ", subjected to a nondestructive confirmation of that preparation, and post-selected in a specific state of spin along z . The terminology "which-way" or "which-slit" is critiqued; it is argued that this usage by both Afshar and his critics is misleading and has contributed to confusion surrounding the interpretation of the experiment. Nevertheless, it is concluded that Bohr would have had no more problem accounting for the Afshar result than he would in accounting for the aforementioned pre- and post-selection spin experiment, in which the particle's preparation state is confirmed by a nondestructive measurement prior to post-selection. In addition, some new inferences about the interpretation of delayed choice experiments are drawn from the analysis.
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
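The report above surveys families of third-order Runge-Kutta methods; its specific coefficient choices are not reproduced here. As a hedged illustration of the class, the following sketch implements one standard member (Kutta's classical RK3, with weights 1/6, 4/6, 1/6) and checks its third-order convergence on the test problem y' = −y. The function names are our own:

```python
import math

def rk3_step(f, t, y, h):
    """One step of Kutta's classical third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
    return y + (h / 6.0) * (k1 + 4.0 * k2 + k3)

def integrate(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 in n equal RK3 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

# Test problem y' = -y, y(0) = 1, exact solution exp(-t).
f = lambda t, y: -y
err_coarse = abs(integrate(f, 0.0, 1.0, 1.0, 20) - math.exp(-1.0))
err_fine = abs(integrate(f, 0.0, 1.0, 1.0, 40) - math.exp(-1.0))
# A third-order method: halving h cuts the global error by about 2**3 = 8.
assert 6.0 < err_coarse / err_fine < 10.0
```

The observed error ratio near 8 under step halving is the signature of the third-order global accuracy discussed in the report; stiff-stability properties depend on the particular coefficient choice and are not probed by this sketch.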
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Lomax, Harvard
1950-01-01
Following the introduction of the linearized partial differential equation for nonsteady three-dimensional compressible flow, general methods of solution are given for the two and three-dimensional steady-state and two-dimensional unsteady-state equations. It is also pointed out that, in the absence of thickness effects, linear theory yields solutions consistent with the assumptions made when applied to lifting-surface problems for swept-back plan forms at sonic speeds. The solutions of the particular equations are determined in all cases by means of Green's theorem, and thus depend on the use of Green's equivalent layer of sources, sinks, and doublets. Improper integrals in the supersonic theory are treated by means of Hadamard's "finite part" technique.
Non-Linear Problems in NMR: Application of the DFM Variation of Parameters Method
NASA Astrophysics Data System (ADS)
Erker, Jay Charles
This dissertation introduces, develops, and applies the Dirac-Frenkel-McLachlan (DFM) time-dependent variation of parameters approach to Nuclear Magnetic Resonance (NMR) problems. Although never explicitly used in the treatment of time-domain NMR problems to date, the DFM approach has successfully predicted the dynamics of optically prepared wave packets on excited-state molecular energy surfaces. Unlike the Floquet, average Hamiltonian, and Van Vleck transformation methods, the DFM approach is not restricted by either the size or the symmetry of the time-domain perturbation. A particularly attractive feature of the DFM method is that measured data can be used to motivate a parameterized trial function choice, and the DFM theory provides the machinery to determine the optimum, minimum-error choices for these parameters. Indeed, a poor parameterized trial function choice will lead to a poor match with real experiments, even with optimized parameters. Although there are many NMR problems available to demonstrate the application of the DFM variation of parameters, several cases that have escaped analytical solution and thus require numerical methods are considered here: molecular diffusion in a magnetic field gradient, radiation damping in the presence of inhomogeneous broadening, multi-site chemical exchange, and the combination of molecular diffusion in a magnetic field gradient with chemical exchange. The application to diffusion in a gradient is used as an example to develop the DFM method for application to NMR. The existence of a known analytical solution and experimental results allows for direct comparison between the theoretical results of the DFM method and Torrey's solution to the Bloch equations corrected for molecular diffusion. The framework of writing classical Bloch equations in matrix notation is then applied to problems without analytical solution. The second example includes the generation of a semi-analytical functional form for the free
1986-01-01
1985), 1-44. [19] V. Majer, Numerical solution of boundary value problems for ordinary differential equations of nonlinear elasticity, Ph.D. Thesis, Univ... based on the factorization method. 1 INTRODUCTION 1.1 Numerical methods for linear boundary value problems for ordinary differential equations. Methods for the numerical solution of linear boundary value problems for ordinary differential equations are presented. The methods are optimal with respect to certain
Exact analysis to any order of the linear coupling problem in the thin lens model
Ruggiero, A.G.
1991-12-31
In this report we attempt the exact solution of the motion of a charged particle in a circular accelerator under the effects of skew quadrupole errors. We adopt a model of error distributions lumped at locations with zero extension. This thin-lens approximation provides analytical insight into the problem to any order. The total solution is expressed in terms of driving terms, which are actually correlation factors to several orders. An application follows on the calculation and correction of tune splitting and on an estimate of the role the higher-order terms play in the correction method.
Extended cubic B-spline method for solving a linear system of second-order boundary value problems.
Heilat, Ahmed Salem; Hamid, Nur Nadiah Abd; Ismail, Ahmad Izani Md
2016-01-01
A method based on extended cubic B-splines is proposed to solve a linear system of second-order boundary value problems. In this method, two free parameters, [Formula: see text] and [Formula: see text], play an important role in producing accurate results. Optimization of these parameters is carried out, and the truncation error is calculated. The method is tested on three examples, which suggest that it produces comparable or more accurate results than cubic B-spline and some other methods.
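The extended cubic B-spline basis and its two free parameters are specific to the paper above. As a simpler baseline sketch (our own construction, not the paper's method), the code below uses ordinary uniform cubic B-spline collocation for a single equation y'' = f(x) with Dirichlet conditions, relying on the standard nodal values B(x_i) = (1/6, 4/6, 1/6) and B''(x_i) = (1, −2, 1)/h² of the uniform cubic B-spline; the extended basis adds its free parameters on top of this construction:

```python
import numpy as np

def bspline_collocation(f, a, b, ya, yb, n):
    """Solve y'' = f(x) on [a, b] with y(a)=ya, y(b)=yb by cubic
    B-spline collocation on a uniform mesh of n steps.  Unknowns are
    the n+3 spline coefficients c_{-1}, ..., c_{n+1}."""
    h = (b - a) / n
    x = a + h * np.arange(n + 1)
    m = n + 3
    A = np.zeros((m, m))
    rhs = np.zeros(m)
    # Boundary rows: (c_{i-1} + 4 c_i + c_{i+1}) / 6 = y at each end.
    A[0, 0:3] = [1/6, 4/6, 1/6];      rhs[0] = ya
    A[m-1, m-3:m] = [1/6, 4/6, 1/6];  rhs[m-1] = yb
    # Collocation rows: (c_{i-1} - 2 c_i + c_{i+1}) / h^2 = f(x_i).
    for i in range(n + 1):
        A[i + 1, i:i+3] = [1/h**2, -2/h**2, 1/h**2]
        rhs[i + 1] = f(x[i])
    c = np.linalg.solve(A, rhs)
    # Recover nodal values y(x_i) = (c_{i-1} + 4 c_i + c_{i+1}) / 6.
    y = (c[:-2] + 4*c[1:-1] + c[2:]) / 6
    return x, y

# Test: y'' = -pi^2 sin(pi x) with y(0) = y(1) = 0, exact y = sin(pi x).
x, y = bspline_collocation(lambda t: -np.pi**2 * np.sin(np.pi * t),
                           0.0, 1.0, 0.0, 0.0, 50)
assert np.max(np.abs(y - np.sin(np.pi * x))) < 1e-2
```

This plain cubic scheme is second-order accurate at the nodes; the point of the extended basis in the paper is that tuning its free parameters can reduce the truncation error further.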
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
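The two calibration strategies compared above can be sketched on synthetic data. Everything below (the linear response y = 2 + 3x, the noise level, and the function names) is an illustrative assumption, not the paper's example:

```python
import random

random.seed(1)
# Synthetic calibration experiment: known standards, noisy instrument
# readings generated by a linear response y = 2 + 3x + noise.
standards = [s / 10.0 for s in range(1, 21)]
readings = [2.0 + 3.0 * x + random.gauss(0.0, 0.05) for x in standards]

def ols(x, y):
    """Ordinary least squares intercept and slope for y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Forward calibration: regress readings on standards, then invert
# the fitted model to turn a new reading into a measurement.
a_f, b_f = ols(standards, readings)
invert = lambda y_new: (y_new - a_f) / b_f

# Reverse calibration: regress standards on readings directly, so no
# inversion is needed (but the usual regression assumptions are bent).
a_r, b_r = ols(readings, standards)
reverse = lambda y_new: a_r + b_r * y_new

# With low noise both estimators recover x = 1.0 from a reading of 5.0.
assert abs(invert(5.0) - 1.0) < 0.05
assert abs(reverse(5.0) - 1.0) < 0.05
```

With small measurement noise the two approaches nearly coincide, which is why the paper's comparison turns on the violated regression assumptions (error in the regressor) rather than on point estimates alone.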
Herman, Gabor T; Chen, Wei
2008-03-01
The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
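ART3 and ART3+ have specific control structures not reproduced here. For intuition only, the following hedged sketch implements the plain sequential (cyclic) projection idea that such algorithms build on and accelerate: repeatedly project the current point onto each violated half-space of A x ≤ b (the Agmon-Motzkin-Schoenberg relaxation). All names and the toy problem are our assumptions:

```python
import numpy as np

def cyclic_projection(A, b, x0, sweeps=200, tol=1e-9):
    """Find a point with A @ x <= b by cyclically projecting onto each
    violated half-space {x : A[i] @ x <= b[i]} (unit relaxation)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        done = True
        for ai, bi in zip(A, b):
            viol = ai @ x - bi
            if viol > tol:
                x -= (viol / (ai @ ai)) * ai   # orthogonal projection
                done = False
        if done:                               # full sweep, no violations
            return x
    return x

# Toy feasibility problem: 0 <= x <= 1 componentwise and x1 + x2 >= 1.2,
# all written as rows of A @ x <= b.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0],
              [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, 1.0, 1.0, -1.2])
x = cyclic_projection(A, b, [5.0, -3.0])
assert np.all(A @ x <= b + 1e-6)
```

The point of ART3+ in the paper is precisely to skip the redundant constraint checks this naive cycle performs once most constraints are already satisfied.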
Cost Cumulant-Based Control for a Class of Linear Quadratic Tracking Problems
2006-08-04
With the initial condition (t0, x0; u) ∈ [t0, tf] × R^n × L²_{Ft}(Ω; C([t0, tf]; R^m)), a traditional finite-horizon integral quadratic form (IQF) random cost J : [t0, tf] × R^n × ... is considered. The dynamics satisfy x(t0) = x0 (5) with output y(t) = C(t)x(t) (6), and the IQF random cost is J(t0, x0; K, u_ext) = [z(tf) − y(tf)]^T Q_f [z(tf) − y(tf)] + ∫_{t0}^{tf} [z(τ) − y(τ)]^T Q(τ) [z(τ) − y(τ)] dτ (7). The controller is to track the prescribed signal z(t) with the finite-horizon IQF cost (7). For fixed k ∈ Z+, the kth cost cumulant in the tracking problem is given by κ_k(t0
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, Efi; Schwenk, Jon; Tejedor, Alejandro
2015-04-01
Are the dynamics of meandering rivers non-linear? What information does the shape of an oxbow lake carry about its forming process? How to characterize self-dissimilar landscapes carrying the signature of larger-scale geologic or tectonic controls? Do we have proper frameworks for quantifying the topology and dynamics of deltaic systems? What can the structural complexity of river networks (erosional and depositional) reveal about their vulnerability and response to change? Can the structure and dynamics of river networks reveal potential hotspots of geomorphic change? All of the above problems are at the heart of understanding landscape evolution, relating process to structure and form, and developing methodologies for inferring how a system might respond to future changes. We argue that a new surge of rigorous methodologies is needed to address these problems. The innovations introduced herein are: (1) gradual wavelet reconstruction for depicting threshold nonlinearity (due to cutoffs) versus inherent nonlinearity (due to underlying dynamics) in river meandering, (2) graph theory for studying the topology and dynamics of deltaic river networks and their response to change, and (3) Lagrangian approaches combined with topology and non-linear dynamics for inferring sediment-driven hotspots of geomorphic change.
NASA Astrophysics Data System (ADS)
Halpern, Paul
2017-01-01
In 1978, John Wheeler proposed the delayed-choice thought experiment as a generalization of the classic double slit experiment intended to help elucidate the nature of decision making in quantum measurement. In particular, he wished to illustrate how a decision made after a quantum system was prepared might retrospectively affect the outcome. He extended his methods to the universe itself, raising the question of whether the universe is a ``self-excited circuit'' in which scientific measurements in the present affect the quantum dynamics in the past. In this talk we'll show how Wheeler's approach revived the notion of Bohr's complementarity, which had by then faded from the prevailing discourse of quantum measurement theory. Wheeler's advocacy reflected, in part, his wish to eliminate the divide in quantum theory between measurer and what was being measured, bringing greater consistency to the ideas of Bohr, a mentor whom he deeply respected.
Media complementarity and health information seeking in Puerto Rico.
Tian, Yan; Robinson, James D
2014-01-01
This investigation incorporates the Orientation1-Stimulus-Orientation2-Response model on the antecedents and outcomes of individual-level complementarity of media use in health information seeking. A secondary analysis of the Health Information National Trends Survey Puerto Rico data suggests that education and gender were positively associated with individual-level media complementarity of health information seeking, which, in turn, was positively associated with awareness of health concepts and organizations, and this awareness was positively associated with a specific health behavior: fruit and vegetable consumption. This study extends the research in media complementarity and health information use; it provides an integrative social psychological model empirically supported by the Health Information National Trends Survey Puerto Rico data.
NASA Astrophysics Data System (ADS)
Helman, E. Udi
This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
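The mixed LCPs in the dissertation above are solved with specialized complementarity software. Purely for intuition about the LCP format itself, here is a minimal sketch of projected Gauss-Seidel for a standard (non-mixed) LCP: find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0, assuming a symmetric positive-definite M. The function name and test data are our assumptions, not the dissertation's market model:

```python
import numpy as np

def lcp_pgs(M, q, iters=500):
    """Projected Gauss-Seidel for the LCP: find z >= 0 such that
    w = M z + q >= 0 and z'w = 0.  Converges for symmetric
    positive-definite M."""
    M, q = np.asarray(M, float), np.asarray(q, float)
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # Residual of row i with z_i removed; solve for z_i and
            # clamp at zero to enforce complementarity.
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

# Small example: the solution of M z = -q is componentwise positive,
# so the LCP solution is z = (4/3, 7/3) with w = 0.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
z = lcp_pgs(M, q)
w = M @ z + q
assert np.all(z >= -1e-8) and np.all(w >= -1e-8)
assert abs(z @ w) < 1e-8
```

Market equilibrium LCPs of the kind described above are far larger and not symmetric in general, which is why production work uses dedicated solvers rather than this splitting iteration.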
NASA Astrophysics Data System (ADS)
Benzaouia, Abdellah; Ouladsine, Mustapha; Ananou, Bouchra
2014-10-01
In this paper, the fault-tolerant control problem for discrete-time switching systems with delay is studied. Sufficient conditions for building an observer are obtained by using multiple Lyapunov functions. These conditions are worked out in a new way, using the cone complementarity technique, to obtain new LMIs with slack variables and multiple weighted residual matrices. The obtained results are applied to a numerical example showing fault detection, fault localisation, and reconfiguration of the control to maintain asymptotic stability even in the presence of a permanent sensor fault.
Challenges to Bohr's Wave-Particle Complementarity Principle
NASA Astrophysics Data System (ADS)
Rabinowitz, Mario
2013-02-01
Contrary to Bohr's complementarity principle, in 1995 Rabinowitz proposed that by using entangled particles from the source it would be possible to determine which slit a particle goes through while still preserving the interference pattern in the Young's two slit experiment. In 2000, Kim et al. used spontaneous parametric down conversion to prepare entangled photons as their source, and almost achieved this. In 2012, Menzel et al. experimentally succeeded in doing this. When the source emits entangled particle pairs, the traversed slit is inferred from measurement of the entangled particle's location by using triangulation. The violation of complementarity breaches the prevailing probabilistic interpretation of quantum mechanics, and benefits Bohm's pilot-wave theory.
NASA Astrophysics Data System (ADS)
Khan, Junaid Ali; Zahoor Raja, Muhammad Asif; Rashidi, Mohammad Mehdi; Syam, Muhammad Ibrahim; Majid Wazwaz, Abdul
2015-10-01
In this research, the well-known non-linear Lane-Emden-Fowler (LEF) equations are approximated by developing a nature-inspired stochastic computational intelligence algorithm. A trial solution of the model is formulated as an artificial feed-forward neural network model containing unknown adjustable parameters. From the LEF equation and its initial conditions, an energy function is constructed that is used in the algorithm for the optimisation of the networks in an unsupervised way. The proposed scheme is tested successfully by applying it on various test cases of initial value problems of LEF equations. The reliability and effectiveness of the scheme are validated through comprehensive statistical analysis. The obtained numerical results are in a good agreement with their corresponding exact solutions, which confirms the enhancement made by the proposed approach.
Blow-up rates of solutions of initial-boundary value problems for a quasi-linear parabolic equation
NASA Astrophysics Data System (ADS)
Anada, Koichi; Ishiwata, Tetsuya
2017-01-01
We consider initial-boundary value problems for a quasi-linear parabolic equation, k_t = k²(k_{θθ} + k), with zero Dirichlet boundary conditions and positive initial data. It is known that each solution blows up at a finite time T with a rate faster than √((T − t)^{-1}). In this paper, it is proved that sup_θ k(θ, t) ≈ √((T − t)^{-1} log log (T − t)^{-1}) as t ↗ T under some assumptions. Our strategy is based on the analysis of curve shortening flows with self-crossings introduced by S.B. Angenent and J.J.L. Velázquez. In addition, we prove some of the numerical conjectures by Watterson which are keys to obtaining the blow-up rate.
NASA Astrophysics Data System (ADS)
Hadjiconstantinou, N. G.; Al-Mohssen, H. A.
2005-06-01
We investigate the time evolution of an impulsive start problem for arbitrary Knudsen numbers (Kn) using a linearized kinetic formulation. The early-time behaviour is described by a solution of the collisionless Boltzmann equation. The same solution can be used to describe the late-time behaviour for Kn ≫ 1. The late-time behaviour for Kn < 0.5 is captured by a newly proposed second-order slip model with no adjustable parameters. All theoretical results are verified by direct Monte Carlo solutions of the nonlinear Boltzmann equation. A measure of the timescale to steady state, normalized by the momentum diffusion timescale, shows that the timescale to steady state is significantly extended by ballistic transport, even at low Knudsen numbers where the latter is only important close to the system walls. This effect is captured for Kn < 0.5 by the slip model which predicts the equivalent effective domain size increase (slip length).
NASA Astrophysics Data System (ADS)
Barutello, Vivina; Jadanza, Riccardo D.; Portaluri, Alessandro
2016-01-01
It is well known that the linear stability of the Lagrangian elliptic solutions in the classical planar three-body problem depends on a mass parameter β and on the eccentricity e of the orbit. We consider only the circular case ( e = 0) but under the action of a broader family of singular potentials: α-homogeneous potentials, for α in (0, 2), and the logarithmic one. It turns out indeed that the Lagrangian circular orbit persists also in this more general setting. We discover a region of linear stability expressed in terms of the homogeneity parameter α and the mass parameter β, then we compute the Morse index of this orbit and of its iterates and we find that the boundary of the stability region is the envelope of a family of curves on which the Morse indices of the iterates jump. In order to conduct our analysis we rely on a Maslov-type index theory devised and developed by Y. Long, X. Hu and S. Sun; a key role is played by an appropriate index theorem and by some precise computations of suitable Maslov-type indices.
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Paraschivoiu, Marius
1998-01-01
We present a finite element technique for the efficient generation of lower and upper bounds to outputs which are linear functionals of the solutions to the incompressible Stokes equations in two space dimensions; the finite element discretization is effected by Crouzeix-Raviart elements, the discontinuous pressure approximation of which is central to our approach. The bounds are based upon the construction of an augmented Lagrangian: the objective is a quadratic "energy" reformulation of the desired output; the constraints are the finite element equilibrium equations (including the incompressibility constraint), and the intersubdomain continuity conditions on velocity. Appeal to the dual max-min problem for appropriately chosen candidate Lagrange multipliers then yields inexpensive bounds for the output associated with a fine-mesh discretization; the Lagrange multipliers are generated by exploiting an associated coarse-mesh approximation. In addition to the requisite coarse-mesh calculations, the bound technique requires solution only of local subdomain Stokes problems on the fine-mesh. The method is illustrated for the Stokes equations, in which the outputs of interest are the flowrate past, and the lift force on, a body immersed in a channel.
Jan Hesthaven
2012-02-06
Final report for DOE Contract DE-FG02-98ER25346 entitled Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences. Principal Investigator Jan S. Hesthaven Division of Applied Mathematics Brown University, Box F Providence, RI 02912 Jan.Hesthaven@Brown.edu February 6, 2012 Note: This grant was originally awarded to Professor David Gottlieb and the majority of the work envisioned reflects his original ideas. However, when Prof Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results The effort in this project focuses on the development of high order accurate computational methods for the solution of hyperbolic equations with application to problems with strong shocks. While the methods are general, emphasis is on applications to gas dynamics with strong shocks.
NASA Technical Reports Server (NTRS)
Lee, Y. M.
1971-01-01
Using a linearized theory of a thermally and mechanically interacting mixture of a linear elastic solid and a viscous fluid, we derive a fundamental relation in integral form, called a reciprocity relation. This reciprocity relation relates the solution of one initial-boundary value problem, with a given set of initial and boundary data, to the solution of a second initial-boundary value problem corresponding to different initial and boundary data for a given interacting mixture. From this general integral relation, reciprocity relations are derived for a heat-conducting linear elastic solid and for a heat-conducting viscous fluid. An initial-boundary value problem is posed and solved for the mixture of a linear elastic solid and a viscous fluid. With the aid of the Laplace transform and contour integration, a real integral representation for the displacement of the solid constituent is obtained as one of the principal results of the analysis.
On the complementarity of ECD and VCD techniques.
Nicu, Valentin Paul; Mándi, Attila; Kurtán, Tibor; Polavarapu, Prasad L
2014-09-01
An unprecedented complementarity of electronic circular dichroism (ECD) and vibrational circular dichroism (VCD) spectroscopic techniques is demonstrated by showing that each technique reveals the structure of a different molecular segment. Using a flexible molecule of biological significance we show that the synergetic use of ECD and VCD yields more complete structural characterization as it provides improved and more reliable conformer resolution.
Generalized uncertainty principle: implications for black hole complementarity
NASA Astrophysics Data System (ADS)
Chen, Pisin; Ong, Yen Chin; Yeom, Dong-han
2014-12-01
At the heart of the black hole information loss paradox and the firewall controversy lies the conflict between quantum mechanics and general relativity. Much has been said about quantum corrections to general relativity, but much less in the opposite direction. It is therefore crucial to examine possible corrections to quantum mechanics due to gravity. Indeed, the Heisenberg uncertainty principle is one profound feature of quantum mechanics, which nevertheless may receive corrections when gravitational effects become important. Such a generalized uncertainty principle (GUP) has been motivated not only by quite general considerations of quantum mechanics and gravity, but also by string-theoretic arguments. We examine the role of the GUP in the context of black hole complementarity. We find that while complementarity can be violated by large-N rescaling if one assumes only the Heisenberg uncertainty principle, the application of the GUP may save complementarity, but only if a certain N-dependence is also assumed. This raises two important questions beyond the scope of this work: whether the GUP really has the proposed form of N-dependence, and whether black hole complementarity is indeed correct.
Addona, Davide
2015-08-15
We obtain weighted uniform estimates for the gradient of the solutions to a class of linear parabolic Cauchy problems with unbounded coefficients. Such estimates are then used to prove existence and uniqueness of the mild solution to a semi-linear backward parabolic Cauchy problem, where the differential equation is the Hamilton–Jacobi–Bellman equation of a suitable optimal control problem. Via backward stochastic differential equations, we show that the mild solution is indeed the value function of the controlled equation and that the feedback law is verified.
Akcelik, Volkan; Flath, Pearl; Ghattas, Omar; Hill, Judith C; Van Bloemen Waanders, Bart; Wilcox, Lucas
2011-01-01
We consider the problem of estimating the uncertainty in large-scale linear statistical inverse problems with high-dimensional parameter spaces within the framework of Bayesian inference. When the noise and prior probability densities are Gaussian, the solution to the inverse problem is also Gaussian, and is thus characterized by the mean and covariance matrix of the posterior probability density. Unfortunately, explicitly computing the posterior covariance matrix requires as many forward solutions as there are parameters, and is thus prohibitive when the forward problem is expensive and the parameter dimension is large. However, for many ill-posed inverse problems, the Hessian matrix of the data misfit term has a spectrum that collapses rapidly to zero. We present a fast method for computation of an approximation to the posterior covariance that exploits the low-rank structure of the preconditioned (by the prior covariance) Hessian of the data misfit. Analysis of an infinite-dimensional model convection-diffusion problem, and numerical experiments on large-scale 3D convection-diffusion inverse problems with up to 1.5 million parameters, demonstrate that the number of forward PDE solves required for an accurate low-rank approximation is independent of the problem dimension. This permits scalable estimation of the uncertainty in large-scale ill-posed linear inverse problems at a small multiple (independent of the problem dimension) of the cost of solving the forward problem.
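The low-rank construction described above can be sketched on a toy Gaussian linear inverse problem. All operators below (forward map, noise and prior covariances, problem sizes) are illustrative, not the paper's; the point is the prior-preconditioned Hessian eigendecomposition and the Sherman-Morrison-Woodbury form of the posterior covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 30, 10          # parameters, observations, retained eigenpairs

F = rng.standard_normal((m, n))          # linear forward operator (illustrative)
Gamma_noise = 0.1 * np.eye(m)            # Gaussian noise covariance
Gamma_prior = np.eye(n)                  # Gaussian prior covariance

# Prior-preconditioned data-misfit Hessian: H = C_pr^{1/2} F^T C_n^{-1} F C_pr^{1/2}
L = np.linalg.cholesky(Gamma_prior)
H = L.T @ F.T @ np.linalg.solve(Gamma_noise, F) @ L

# For ill-posed problems this spectrum collapses quickly; keep only the top k.
eigvals, eigvecs = np.linalg.eigh(H)
idx = np.argsort(eigvals)[::-1][:k]
lam, V = eigvals[idx], eigvecs[:, idx]

# Sherman-Morrison-Woodbury form of the posterior covariance:
# C_post ~= C_pr - C_pr^{1/2} V diag(lam/(1+lam)) V^T C_pr^{1/2}
C_post_lr = Gamma_prior - L @ (V * (lam / (1.0 + lam))) @ V.T @ L.T

# Exact posterior covariance (affordable only at this toy size), for comparison
C_post = np.linalg.inv(F.T @ np.linalg.solve(Gamma_noise, F)
                       + np.linalg.inv(Gamma_prior))
err = np.linalg.norm(C_post - C_post_lr, 2) / np.linalg.norm(C_post, 2)
print(err)
```

In exact arithmetic the discarded terms each contribute at most lam/(1+lam) < 1 to the spectral-norm error, which is why a handful of dominant eigenpairs suffices when the spectrum decays fast.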
NASA Technical Reports Server (NTRS)
Hall, Philip
1989-01-01
Goertler vortices are thought to be the cause of transition in many fluid flows of practical importance. A review of the different stages of vortex growth is given. In the linear regime, nonparallel effects completely govern this growth, and parallel flow theories do not capture the essential features of the development of the vortices. A detailed comparison between the parallel and nonparallel theories is given and it is shown that at small vortex wavelengths, the parallel flow theories have some validity; otherwise nonparallel effects are dominant. New results for the receptivity problem for Goertler vortices are given; in particular vortices induced by free stream perturbations impinging on the leading edge of the walls are considered. It is found that the most dangerous mode of this type can be isolated and its neutral curve is determined. This curve agrees very closely with the available experimental data. A discussion of the different regimes of growth of nonlinear vortices is also given. Again it is shown that, unless the vortex wavelength is small, nonparallel effects are dominant. Some new results for nonlinear vortices of O(1) wavelengths are given and compared to experimental observations.
NASA Astrophysics Data System (ADS)
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
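The expected value formulation mentioned above reduces the stochastic LCP to a deterministic one, LCP(M, q): find z >= 0 with w = Mz + q >= 0 and z^T w = 0. A minimal sketch of one standard solver for such problems, projected Gauss-Seidel, on illustrative data (not the article's FIT model):

```python
import numpy as np

def lcp_projected_gauss_seidel(M, q, iters=500, tol=1e-10):
    """Projected Gauss-Seidel for LCP(M, q): find z >= 0 with
    w = M z + q >= 0 and z.w = 0. A standard sufficient condition
    for convergence is that M is symmetric positive definite."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        z_old = z.copy()
        for i in range(n):
            # residual of row i with z[i] removed, then project onto z[i] >= 0
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
        if np.linalg.norm(z - z_old) < tol:
            break
    return z

# Tiny SPD example (values are illustrative)
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-2.0, 1.0])
z = lcp_projected_gauss_seidel(M, q)
w = M @ z + q
print(z, w)   # z = [0.5, 0]: z >= 0, w >= 0, and z.w = 0 (complementarity)
```

For the expected residual minimization formulation one would instead minimize a merit function of the residual over the random scenarios, e.g. by Monte Carlo sampling, as the article does.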
The complementarity relations of quantum coherence in quantum information processing.
Pan, Fei; Qiu, Liang; Liu, Zhi
2017-03-08
We establish two complementarity relations for the relative entropy of coherence in quantum information processing, i.e., quantum dense coding and teleportation. We first give an uncertainty-like expression relating local quantum coherence to the capacity of optimal dense coding for bipartite system. The relation can also be applied to the case of dense coding by using unital memoryless noisy quantum channels. Further, the relation between local quantum coherence and teleportation fidelity for two-qubit system is given.
The complementarity relations of quantum coherence in quantum information processing
NASA Astrophysics Data System (ADS)
Pan, Fei; Qiu, Liang; Liu, Zhi
2017-03-01
We establish two complementarity relations for the relative entropy of coherence in quantum information processing, i.e., quantum dense coding and teleportation. We first give an uncertainty-like expression relating local quantum coherence to the capacity of optimal dense coding for bipartite system. The relation can also be applied to the case of dense coding by using unital memoryless noisy quantum channels. Further, the relation between local quantum coherence and teleportation fidelity for two-qubit system is given.
The complementarity relations of quantum coherence in quantum information processing
Pan, Fei; Qiu, Liang; Liu, Zhi
2017-01-01
We establish two complementarity relations for the relative entropy of coherence in quantum information processing, i.e., quantum dense coding and teleportation. We first give an uncertainty-like expression relating local quantum coherence to the capacity of optimal dense coding for bipartite system. The relation can also be applied to the case of dense coding by using unital memoryless noisy quantum channels. Further, the relation between local quantum coherence and teleportation fidelity for two-qubit system is given. PMID:28272481
Sexual complementarity between host humoral toxicity and soldier caste in a polyembryonic wasp
Uka, Daisuke; Sakamoto, Takuma; Yoshimura, Jin; Iwabuchi, Kikuo
2016-01-01
Defense against enemies is a type of natural selection considered fundamentally equivalent between the sexes. In reality, however, whether males and females differ in defense strategy is unknown. Multiparasitism necessarily leads to the problem of defense for a parasite (parasitoid). The polyembryonic parasitic wasp Copidosoma floridanum is famous for its larval soldiers’ ability to kill other parasites. This wasp also exhibits sexual differences not only with regard to the competitive ability of the soldier caste but also with regard to host immune enhancement. Female soldiers are more aggressive than male soldiers, and their numbers increase upon invasion of the host by other parasites. In this report, in vivo and in vitro competition assays were used to test whether females have a toxic humoral factor; if so, then its strength was compared with that of males. We found that females have a toxic factor that is much weaker than that of males. Our results imply sexual complementarity between host humoral toxicity and larval soldiers. We discuss how this sexual complementarity guarantees adaptive advantages for both males and females despite the one-sided killing of male reproductives by larval female soldiers in a mixed-sex brood. PMID:27385149
On reducibility of degenerate optimization problems to regular operator equations
NASA Astrophysics Data System (ADS)
Bednarczuk, E. M.; Tretyakov, A. A.
2016-12-01
We present an application of the p-regularity theory to the analysis of non-regular (irregular, degenerate) nonlinear optimization problems. The p-regularity theory, also known as the p-factor analysis of nonlinear mappings, was developed during the last thirty years. The p-factor analysis is based on the construction of the p-factor operator, which allows us to analyze optimization problems in the degenerate case. We investigate reducibility of a non-regular optimization problem to a regular system of equations which does not depend on the objective function. As an illustration we consider applications of our results to non-regular complementarity problems of mathematical programming and to linear programming problems.
ERIC Educational Resources Information Center
Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice
2015-01-01
The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…
Meyer, J C; Needham, D J
2015-03-08
In this paper, we examine a semi-linear parabolic Cauchy problem with non-Lipschitz nonlinearity which arises as a generic form in a significant number of applications. Specifically, we obtain a well-posedness result and examine the qualitative structure of the solution in detail. The standard classical approach to establishing well-posedness is precluded owing to the lack of Lipschitz continuity for the nonlinearity. Here, existence and uniqueness of solutions is established via the recently developed generic approach to this class of problem (Meyer & Needham 2015 The Cauchy problem for non-Lipschitz semi-linear parabolic partial differential equations. London Mathematical Society Lecture Note Series, vol. 419) which examines the difference of the maximal and minimal solutions to the problem. From this uniqueness result, the approach of Meyer & Needham allows for development of a comparison result which is then used to exhibit global continuous dependence of solutions to the problem on a suitable initial dataset. The comparison and continuous dependence results obtained here are novel to this class of problem. This class of problem arises specifically in the study of a one-step autocatalytic reaction, which is schematically given by A→B at rate a(p)b(q) (where a and b are the concentrations of A and B, respectively, with 0
problem has been lacking up to the present.
ERIC Educational Resources Information Center
Strickland, Tricia K.; Maccini, Paula
2013-01-01
We examined the effects of the Concrete-Representational-Abstract Integration strategy on the ability of secondary students with learning disabilities to multiply linear algebraic expressions embedded within contextualized area problems. A multiple-probe design across three participants was used. Results indicated that the integration of the…
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimal input to describe the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by least-squares best-fit strain tensors at the mesh points where the deflections are given. The selection of local coordinate systems, whenever necessary, is automatic. The core memory is used efficiently by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions at assembly time.
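The displacement method with linear elements can be sketched in a few lines for the simplest case: a 1D bar, fixed at the left end, with an axial tip load. The geometry, material constants, and loading below are illustrative; the program described above handles far more general elements and structures.

```python
import numpy as np

n_elem = 4
EA, L_total, P = 1.0, 1.0, 1.0           # axial stiffness, length, tip load
h = L_total / n_elem
k_e = (EA / h) * np.array([[1.0, -1.0],  # 2x2 linear-element stiffness
                           [-1.0, 1.0]])

n_nodes = n_elem + 1
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):                  # assemble the global stiffness matrix
    K[e:e+2, e:e+2] += k_e

F = np.zeros(n_nodes)
F[-1] = P                                # point load at the free end

# Impose the boundary condition u_0 = 0 by deleting the constrained row/column
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])
print(u)   # nodal deflections; tip deflection P*L/(EA) = 1.0
```

For this problem the piecewise linear interpolation reproduces the exact linear deflection field at the nodes, consistent with the convergence-from-the-stiffer-side property noted in the abstract.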
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
Spatio-temporal complementarity of wind and solar power in India
NASA Astrophysics Data System (ADS)
Lolla, Savita; Baidya Roy, Somnath; Chowdhury, Sourangshu
2015-04-01
Wind and solar power are likely to be part of the solution to the climate change problem, which is why they feature prominently in the energy policies of all industrial economies, including India. One of the major hindrances preventing explosive growth of wind and solar energy is intermittency. This is a major problem because, in a rapidly moving economy, energy production must match the patterns of energy demand; moreover, sudden increases and decreases in energy supply may destabilize the power grids, leading to disruptions in supply. In this work we explore whether the patterns of variability in wind and solar energy availability can offset each other so that a constant supply can be guaranteed. As a first step, this work focuses on seasonal-scale variability for each of the 5 regional power transmission grids in India. Communication within each grid is better than communication between grids; hence, it is assumed that each grid can switch between sources relatively easily. Wind and solar resources are estimated using the MERRA Reanalysis data for the 1979-2013 period. Solar resources are calculated with a 20% conversion efficiency, and wind resources are estimated using a 2 MW turbine power curve. Total resources are obtained by optimizing the location and number of wind/solar energy farms. Preliminary results show that the southern and western grids are more appropriate for cogeneration than the other grids. Many studies on wind-solar cogeneration have focused on temporal complementarity at the local scale; however, this is one of the first studies to explore spatial complementarity over regional scales. This project may help accelerate renewable energy penetration in India by identifying the regional grid(s) where the renewable energy intermittency problem can be minimized.
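The abstract does not state its complementarity metric; a common choice is the (anti-)correlation of normalized resource time series, which can be sketched on made-up monthly profiles. The profiles below are illustrative (wind peaking mid-year, solar peaking at the year ends), not MERRA-derived data.

```python
import numpy as np

months = np.arange(12)
# Exactly anti-phase synthetic seasonal profiles (illustrative only)
wind = 1.0 + 0.6 * np.cos(2 * np.pi * (months - 6) / 12)   # monsoon peak
solar = 1.0 + 0.6 * np.cos(2 * np.pi * months / 12)        # dry-season peak

# Pearson correlation of the two normalized profiles: -1 means perfectly
# complementary seasonal patterns
wind_n = (wind - wind.mean()) / wind.std()
solar_n = (solar - solar.mean()) / solar.std()
corr = float(np.mean(wind_n * solar_n))
print(corr)   # -1.0 for these exactly anti-phase profiles

# An equally weighted mix is correspondingly flatter than either source alone
combined = 0.5 * (wind + solar)
print(combined.std(), wind.std())
```

With real reanalysis data the correlation would of course be far from -1, but the same statistic lets one rank regional grids by how much cogeneration smooths seasonal supply.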
NASA Astrophysics Data System (ADS)
Tanemura, M.; Chida, Y.
2016-09-01
Many control system design problems are expressed as the minimization of a performance index under BMI conditions. A minimization problem expressed with LMIs, by contrast, can be solved easily because of the convexity of LMIs. Therefore, many researchers have studied transforming a variety of control design problems into convex minimization problems expressed with LMIs. This paper proposes an LMI method for a quadratic performance index minimization problem with a class of BMI conditions. The minimization problem treated in this paper includes design problems of state-feedback gains for switched systems, among others. The effectiveness of the proposed method is verified through a state-feedback gain design for switched systems and a numerical simulation using the designed feedback gains.
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
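The local linearization idea for the quaternion rate equations can be sketched as follows: holding the angular rate constant over a step, q' = (1/2) Omega(w) q integrates exactly to q_{k+1} = exp((1/2) Omega dt) q_k, and for this skew structure the matrix exponential has a closed form. The scalar-first quaternion convention and the test trajectory below are assumptions for illustration, not the report's exact formulation.

```python
import numpy as np

def omega_matrix(w):
    """4x4 matrix Omega(w) in the quaternion rate equation q' = 0.5 Omega q
    (scalar-first convention; satisfies Omega^2 = -|w|^2 I)."""
    p, q_, r = w
    return np.array([[0.0, -p,  -q_, -r],
                     [p,    0.0,  r, -q_],
                     [q_,  -r,  0.0,  p],
                     [r,    q_, -p,  0.0]])

def step_local_linearization(quat, w, dt):
    """One step with w held constant: q_{k+1} = exp(0.5 Omega dt) q_k.
    Since Omega^2 = -|w|^2 I, the exponential reduces to a sine/cosine form."""
    n = np.linalg.norm(w)
    if n < 1e-12:
        return quat
    half = 0.5 * n * dt
    A = np.cos(half) * np.eye(4) + (np.sin(half) / n) * omega_matrix(w)
    out = A @ quat
    return out / np.linalg.norm(out)     # guard against round-off drift

# Spin at 1 rad/s about the body x-axis for 1 s, in 100 small steps
quat = np.array([1.0, 0.0, 0.0, 0.0])
w = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    quat = step_local_linearization(quat, w, 0.01)
print(quat)   # = [cos(0.5), sin(0.5), 0, 0]: exact for constant rate
```

Because the update is exact for a constant angular rate, its error comes only from rate variation within a step, which is the stability and accuracy advantage over classical low-order schemes at high angular rates.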
Further Development in the Global Resolution of Convex Programs with Complementarity Constraints
2014-04-09
Discusses various methods to tighten the relaxation by exploiting complementarity, with the aim of constructing better approximations to the convex hull of... (AFRL-OSR-VA-TR-2014-0126, "Global Resolution of Convex Programs with Complementarity Constraints," Angelia Nedich, University of Illinois, Final Report.)
Reinforcement learning in complementarity game and population dynamics.
Jost, Jürgen; Li, Wei
2014-02-01
We systematically test and compare different reinforcement learning schemes in a complementarity game [J. Jost and W. Li, Physica A 345, 245 (2005)] played between members of two populations. More precisely, we study the Roth-Erev, Bush-Mosteller, and SoftMax reinforcement learning schemes. A modified version of Roth-Erev with a power exponent of 1.5, as opposed to 1 in the standard version, performs best. We also compare these reinforcement learning strategies with evolutionary schemes. This gives insight into aspects like the issue of quick adaptation as opposed to systematic exploration or the role of learning rates.
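The modified Roth-Erev scheme compared above can be sketched in a few lines: propensities are reinforced by received payoffs, and actions are drawn with probability proportional to propensity^1.5 (exponent 1 recovers the standard scheme). The two-action payoff setting and all constants are illustrative, not the complementarity game of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
payoff = np.array([1.0, 0.2])        # action 0 pays more (illustrative)
propensity = np.ones(2)              # initial propensities
power = 1.5                          # modified Roth-Erev exponent

for _ in range(2000):
    weights = propensity ** power    # superlinear weighting of propensities
    probs = weights / weights.sum()
    a = rng.choice(2, p=probs)       # draw an action
    propensity[a] += payoff[a]       # reinforce the chosen action by its payoff

print(probs, propensity)
```

With the superlinear exponent the choice probabilities tend to lock in on one action faster than the standard (exponent 1) scheme, which trades systematic exploration for quick adaptation, the tension the paper examines.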
Complementarity of the Maldacena and Randall-Sundrum pictures
Duff; Liu
2000-09-04
We revive an old result, that one-loop corrections to the graviton propagator induce 1/r^3 corrections to the Newtonian gravitational potential, and compute the coefficient due to closed loops of the U(N) N = 4 super-Yang-Mills theory that arises in Maldacena's anti-de Sitter/conformal field theory correspondence. We find exact agreement with the coefficient appearing in the Randall-Sundrum brane-world proposal. This provides more evidence for the complementarity of the two pictures.
NASA Astrophysics Data System (ADS)
Hadrava, M.; Feistauer, M.; Horáček, J.; Kosík, A.
2013-10-01
The paper is concerned with the numerical solution of static and dynamic elasticity problems. These problems arise in the computation of the so-called ALE mapping (representing the mesh deformation) in the solution of flow in time-dependent domains, and in the computation of the time-dependent deformation of an elastic body; the two are important ingredients in fluid-structure interaction (FSI). Both are discretized by the discontinuous Galerkin method (DGM). Here we describe the method and present some test problems. The developed method is applied to the FSI problem treated in [2].
ERIC Educational Resources Information Center
Mills, James W.; And Others
1973-01-01
The study reported here tested an application of the Linear Programming Model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)
NASA Astrophysics Data System (ADS)
Lozhnikov, D. A.
2012-03-01
S. Yu. Dobrokhotov, B. Tirozzi, S. Ya. Sekerzh-Zenkovich, A. I. Shafarevich, and their co-authors, in a series of papers, suggested new effective asymptotic formulas for solving a Cauchy problem with localized initial data for multidimensional linear hyperbolic equations with variable coefficients and, in particular, for a linearized system of shallow-water equations over an uneven bottom. The solutions are localized in a neighborhood of fronts on which focal points and self-intersection points (singular points) occur in the course of time, due to the variability of the coefficients. In the present paper, a numerical realization of the asymptotic formulas in a neighborhood of singular points of fronts is presented for the system of shallow-water equations, the problem of gluing these formulas to the formulas valid in regular domains is discussed, and a comparison of the asymptotic solutions with solutions obtained by direct numerical computation is carried out.
Autoantigen complementarity and its contributions to hallmarks of autoimmune disease.
Pendergraft, William F; Badhwar, Anshul K; Preston, Gloria A
2015-06-21
The question considered is, "What causes the autoimmune response to begin and what causes it to worsen into autoimmune disease?" The theory of autoantigen complementarity posits that the initiating immunogen causing disease is a protein complementary (antisense) to the self-antigen, rather than a response to the native protein. The resulting primary antibody elicits an anti-antibody response, or anti-idiotype, consequently producing a disease-inciting autoantibody. Yet, not everyone who develops self-reactive autoantibodies will manifest autoimmune disease. What is apparent is that manifestation of disease is governed by the acquisition of multiple immune-compromising traits that increase susceptibility and drive disease. Taking into account current cellular, molecular, and genetic information, six traits, or 'hallmarks', of autoimmune disease were proposed: (1) autoreactive cells evade deletion, (2) presence of asymptomatic autoantibodies, (3) hyperactivity of the Fc-FcR pathway, (4) susceptibility to environmental impact, (5) antigenic modifications of self-proteins, (6) microbial infections. Presented here is a discussion of how components delineated in the theory of autoantigen complementarity potentially promote the acquisition of multiple 'hallmarks' of disease.
Indivisibility, Complementarity and Ontology: A Bohrian Interpretation of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Roldán-Charria, Jairo
2014-12-01
The interpretation of quantum mechanics presented in this paper is inspired by two ideas that are fundamental in Bohr's writings: indivisibility and complementarity. Further basic assumptions of the proposed interpretation are completeness, universality and conceptual economy. In the interpretation, decoherence plays a fundamental role in the understanding of measurement. A general and precise conception of complementarity is proposed. It is fundamental in this interpretation to make a distinction between ontological reality, constituted by everything that does not depend at all on the collectivity of human beings, nor on their decisions, limitations or existence, and empirical reality, constituted by everything that, while not ontological, is nevertheless intersubjective. According to the proposed interpretation, neither the dynamical properties, nor the constitutive properties of microsystems like mass, charge and spin, are ontological. The properties of macroscopic systems and space-time are also considered to belong to empirical reality. The acceptance of the above mentioned conclusion does not imply a total rejection of the notion of ontological reality. In the paper, utilizing the Aristotelian ideas of general cause and potentiality, a relation between ontological reality and empirical reality is proposed. Some glimpses of ontological reality, in the form of what can be said about it, are finally presented.
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
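An LP solve of the kind outlined above can be sketched with a standard solver; the problem data are illustrative (a classic two-variable product-mix example), not from the report.

```python
from scipy.optimize import linprog

# Maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0,
# written as a minimization of c.x as linprog expects.
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0],
        [0.0, 2.0],
        [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimum at x = (2, 6), objective value -36
```

The dual values and reduced costs mentioned in the abstract fall out of the same solve: the dual prices the binding constraints (here the second and third), and a nonbasic variable's reduced cost measures how much its objective coefficient must improve before it enters the solution.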
NASA Astrophysics Data System (ADS)
Turkin, Alexander; van Oijen, Antoine M.; Turkin, Anatoliy A.
2015-11-01
One-dimensional sliding along DNA as a means to accelerate protein target search is a well-known phenomenon occurring in various biological systems. Using a biomimetic approach, we have recently demonstrated the practical use of DNA-sliding peptides to speed up bimolecular reactions by more than an order of magnitude by allowing the reactants to associate not only in the solution by three-dimensional (3D) diffusion, but also on DNA via one-dimensional (1D) diffusion [A. Turkin et al., Chem. Sci. (2015), 10.1039/C5SC03063C]. Here we present a mean-field kinetic model of a bimolecular reaction in a solution with linear extended sinks (e.g., DNA) that can intermittently trap molecules present in a solution. The model consists of chemical rate equations for mean concentrations of reacting species. Our model demonstrates that addition of linear traps to the solution can significantly accelerate reactant association. We show that at optimum concentrations of linear traps the 1D reaction pathway dominates in the kinetics of the bimolecular reaction; i.e., these 1D traps function as an assembly line of the reaction product. Moreover, we show that the association reaction on linear sinks between trapped reactants exhibits a nonclassical third-order behavior. Predictions of the model agree well with our experimental observations. Our model provides a general description of bimolecular reactions that are controlled by a combined 3D+1D mechanism and can be used to quantitatively describe both naturally occurring as well as biomimetic biochemical systems that reduce the dimensionality of search.
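A heavily reduced mean-field model of this kind can be sketched as rate equations with two association channels: a bimolecular 3D term for free reactants and a third-order 1D term for trapped reactants meeting on a trap. All rate constants, the trap concentration, and the exact form of the equations below are made-up illustrative choices, not the paper's model.

```python
from scipy.integrate import solve_ivp

k3, k1 = 0.05, 5.0        # 3D and on-trap association rate constants (illustrative)
kon, koff = 2.0, 0.5      # trapping / release rate constants
T = 1.0                   # concentration of linear traps

def rhs(t, y):
    a, b, at, bt, p = y   # free A, free B, trapped A, trapped B, product
    assoc = k3 * a * b + k1 * at * bt * T   # note the third-order 1D term
    return [-k3 * a * b - kon * a * T + koff * at,
            -k3 * a * b - kon * b * T + koff * bt,
            kon * a * T - koff * at - k1 * at * bt * T,
            kon * b * T - koff * bt - k1 * at * bt * T,
            assoc]

y0 = [1.0, 1.0, 0.0, 0.0, 0.0]          # start with only free reactants
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-8, atol=1e-10)
a, b, at, bt, p = sol.y[:, -1]
print(p)   # product formed; mass balance a + at + p = 1 is conserved
```

Setting k1 = 0 (no 1D channel) in the same model lets one quantify the acceleration due to the traps, and the k1*at*bt*T term is the nonclassical third-order behavior the abstract refers to.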
Interpersonal Complementarity in the Mental Health Intake: A Mixed-Methods Study
ERIC Educational Resources Information Center
Rosen, Daniel C.; Miller, Alisa B.; Nakash, Ora; Halperin, Lucila; Alegria, Margarita
2012-01-01
The study examined which socio-demographic differences between clients and providers influenced interpersonal complementarity during an initial intake session; that is, behaviors that facilitate harmonious interactions between client and provider. Complementarity was assessed using blinded ratings of 114 videotaped intake sessions by trained…
ERIC Educational Resources Information Center
Laird, Heather; Vande Kemp, Hendrika
1987-01-01
Explored the level of family therapist complementarity in the early, middle and late stages of therapy by performing a micro-analysis of Salvador Minuchin with one family in successful therapy. Level of therapist complementarity was significantly greater in the early and late stages than in the middle stage, and was significantly correlated with…
ERIC Educational Resources Information Center
Rothe, J. Peter
This article focuses on the linkage between the quantitative and qualitative distance education research methods. The concept that serves as the conceptual link is termed "complementarity." The definition of complementarity emerges through a simulated study of FernUniversitat's mentors. The study shows that in the case of the mentors,…
NASA Astrophysics Data System (ADS)
Tanaka, Hidefumi; Yamamoto, Yuhji
2016-05-01
Palaeointensity experiments were carried out on a sample collection from two sections of basalt lava flow sequences of Pliocene age in north central Iceland (Chron C2An) to further refine the knowledge of the behaviour of the palaeomagnetic field. Selection of samples was mainly based on their stability of remanence to thermal demagnetization as well as good reversibility in variations of magnetic susceptibility and saturation magnetization with temperature, which would indicate the presence of magnetite as a product of deuteric oxidation of titanomagnetite. Among 167 lava flows from two sections, 44 flows were selected for the Königsberger-Thellier-Thellier experiment in vacuum. In spite of careful pre-selection of samples, an Arai plot with two linear segments, or a concave-up appearance, was often encountered during the experiments. This non-ideal behaviour was probably caused by an irreversible change in the domain state of the magnetic grains of the pseudo-single-domain (PSD) range. This is assumed because an ideal linear plot was obtained in the second run of the palaeointensity experiment, in which a laboratory thermoremanence acquired after the final step of the first run was used as a natural remanence. This experiment was conducted on six selected samples, and no clear difference between the magnetic grains of the experimented and pristine sister samples was found by scanning electron microscope and hysteresis measurements, that is, no occurrence of notable chemical/mineralogical alteration, suggesting that no change in the grain size distribution had occurred. Hence, the two-segment Arai plot was not caused by the reversible multidomain/PSD effect in which the curvature of the Arai plot is dependent on the grain size. Considering that the irreversible change in domain state must have affected data points at not only high temperatures but also low temperatures, fv ≥ 0.5 was adopted as one of the acceptance criteria where fv is a vectorially defined
Complementarity in Spontaneous Emission: Quantum Jumps, Staggers and Slides
NASA Astrophysics Data System (ADS)
Wiseman, H.
Dan Walls is rightly famous for his part in many of the outstanding developments in quantum optics in the last 30 years. Two of these are most relevant to this paper. The first is the prediction of nonclassical properties of the fluorescence of a two-level atom, such as antibunching [1] and squeezing [2]. Both of these predictions have now been verified experimentally [3,4]. The second is the investigation of fundamental issues such as complementarity and the uncertainty principle [5,6]. This latter area is one which has generated a lively theoretical discussion [7], and, more importantly, suggested new experiments [8]. It was also an area in which I had the honour of working with Dan [9], and of gaining the benefit of his instinct for picking a fruitful line of investigation.
Reinforcement learning in complementarity game and population dynamics
NASA Astrophysics Data System (ADS)
Jost, Jürgen; Li, Wei
2014-02-01
We systematically test and compare different reinforcement learning schemes in a complementarity game [J. Jost and W. Li, Physica A 345, 245 (2005), 10.1016/j.physa.2004.07.005] played between members of two populations. More precisely, we study the Roth-Erev, Bush-Mosteller, and SoftMax reinforcement learning schemes. A modified version of Roth-Erev with a power exponent of 1.5, as opposed to 1 in the standard version, performs best. We also compare these reinforcement learning strategies with evolutionary schemes. This gives insight into aspects like the issue of quick adaptation as opposed to systematic exploration or the role of learning rates.
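The modified Roth-Erev scheme described above can be sketched as follows. This is a minimal illustration assuming choice probabilities proportional to propensities raised to the power exponent; the function names and the optional forgetting parameter are our own, not taken from the paper.

```python
import random

def choose_action(propensities, exponent=1.5):
    """Pick an action with probability proportional to propensity**exponent.

    exponent=1.5 corresponds to the modified Roth-Erev scheme mentioned in
    the abstract; exponent=1.0 recovers the standard version.
    """
    weights = [q ** exponent for q in propensities]
    total = sum(weights)
    r = random.random() * total
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return i
    return len(weights) - 1

def update(propensities, action, payoff, forgetting=0.0):
    """Reinforce the chosen action by its payoff, with optional forgetting."""
    return [(1.0 - forgetting) * q + (payoff if i == action else 0.0)
            for i, q in enumerate(propensities)]
```

Raising propensities to an exponent above 1 sharpens the choice distribution, which is one way to trade systematic exploration for quicker adaptation.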
Complementarity of Neutrinoless Double Beta Decay and Cosmology
Dodelson, Scott; Lykken, Joseph
2014-03-20
Neutrinoless double beta decay experiments constrain one combination of neutrino parameters, while cosmic surveys constrain another. This complementarity opens up an exciting range of possibilities. If neutrinos are Majorana particles, and the neutrino masses follow an inverted hierarchy, then the upcoming sets of both experiments will detect signals. The combined constraints will pin down not only the neutrino masses but also constrain one of the Majorana phases. If the hierarchy is normal, then a beta decay detection with the upcoming generation of experiments is unlikely, but cosmic surveys could constrain the sum of the masses to be relatively heavy, thereby producing a lower bound for the neutrinoless double beta decay rate, and therefore an argument for a next generation beta decay experiment. In this case as well, a combination of the phases will be constrained.
Phenomenology and the life sciences: Clarifications and complementarities.
Sheets-Johnstone, Maxine
2015-12-01
This paper first clarifies phenomenology in ways essential to demonstrating its basic concern with Nature and its recognition of individual and cultural differences as well as commonalities. It furthermore clarifies phenomenological methodology in ways essential to understanding the methodology itself, its purpose, and its consequences. These clarifications show how phenomenology, by hewing to the dynamic realities of life itself and experiences of life itself, counters reductive thinking and "embodiments" of one kind and another. On the basis of these clarifications, the paper then turns to detailing conceptual complementarities between phenomenology and the life sciences, particularly highlighting studies in coordination dynamics. In doing so, it brings to light fundamental relationships such as those between mind and motion and between intrinsic dynamics and primal animation. It furthermore highlights the common concern with origins in both phenomenology and evolutionary biology: the history of how what is present is related to its inception in the past and to its transformations from past to present.
Complementarity of information and the emergence of the classical world
NASA Astrophysics Data System (ADS)
Zwolak, Michael; Zurek, Wojciech
2013-03-01
We prove an anti-symmetry property relating accessible information about a system through some auxiliary system F and the quantum discord with respect to a complementary system F'. In Quantum Darwinism, where fragments of the environment relay information to observers, this relation allows us to understand some fundamental properties regarding correlations between a quantum system and its environment. First, it relies on a natural separation of accessible information and quantum information about a system. Under decoherence, this separation shows that accessible information is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. Second, it shows that objective information becomes accessible to many observers only when quantum information is relegated to correlations with the global environment, and, therefore, locally inaccessible. The resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality, and supports Bohr's intuition that quantum phenomena acquire classical reality only when communicated.
Complementarity of quantum discord and classically accessible information
Zwolak, Michael P.; Zurek, Wojciech H.
2013-05-20
The sum of the Holevo quantity (that bounds the capacity of quantum channels to transmit classical information about an observable) and the quantum discord (a measure of the quantumness of correlations of that observable) yields an observable-independent total given by the quantum mutual information. This split naturally delineates information about quantum systems accessible to observers – information that is redundantly transmitted by the environment – while showing that it is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. In addition, we prove an anti-symmetry property relating accessible information and discord. It shows that information becomes objective – accessible to many observers – only as quantum information is relegated to correlations with the global environment, and, therefore, locally inaccessible. Lastly, the resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality while flagrantly quantum superpositions are out of reach.
Bayesian Inference for Duplication–Mutation with Complementarity Network Models
Persing, Adam; Beskos, Alexandros; Heine, Kari; De Iorio, Maria
2015-01-01
We observe an undirected graph G without multiple edges and self-loops, which is to represent a protein–protein interaction (PPI) network. We assume that G evolved under the duplication–mutation with complementarity (DMC) model from a seed graph, G0, and we also observe the binary forest Γ that represents the duplication history of G. A posterior density for the DMC model parameters is established, and we outline a sampling strategy by which one can perform Bayesian inference; that sampling strategy employs a particle marginal Metropolis–Hastings (PMMH) algorithm. We test our methodology on numerical examples to demonstrate a high accuracy and precision in the inference of the DMC model's mutation and homodimerization parameters. PMID:26355682
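A single growth step of a duplication-mutation-with-complementarity model of the kind named above can be sketched as below; the parameter names q_mod and q_con and the adjacency-list representation are our assumptions, not the authors' notation.

```python
import random

def dmc_duplicate(adj, parent, q_mod, q_con):
    """One DMC growth step on an adjacency-list graph (list of neighbour sets).

    The new node copies the parent's neighbours; for each shared neighbour,
    with probability q_mod one of the two duplicate edges (chosen uniformly)
    is removed; finally an edge between copy and parent is added with
    probability q_con (the complementarity/homodimerization step).
    """
    new = len(adj)
    adj.append(set(adj[parent]))
    for nb in list(adj[parent]):
        adj[nb].add(new)
        if random.random() < q_mod:
            loser = random.choice([parent, new])
            adj[loser].discard(nb)
            adj[nb].discard(loser)
    if random.random() < q_con:
        adj[parent].add(new)
        adj[new].add(parent)
    return adj
```

Repeating this step from a seed graph G0 generates the networks over whose parameters (q_mod, q_con) the PMMH sampler performs inference.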
NASA Technical Reports Server (NTRS)
Wong, P. K.
1975-01-01
The closely related problems of designing a reliable feedback stabilization strategy and coordinating decentralized feedbacks are considered. Two approaches are taken. First, a geometric characterization of the structure of control interaction (and its dual) was attempted, and a concept of structural homomorphism was developed based on the idea of 'similarity' of interaction patterns. Second, the idea of finding classes of individual feedback maps that do not 'interfere' with the stabilizing action of each other was developed by identifying the structural properties of nondestabilizing and LQ-optimal feedback maps. Some known stability properties of LQ-feedback were generalized and some partial solutions were provided to the reliable stabilization and decentralized feedback coordination problems. A concept of coordination parametrization was introduced, and a scheme for classifying different modes of decentralization (information, control law computation, on-line control implementation) in control systems was developed.
Complementarity of genotoxic and nongenotoxic predictors of rodent carcinogenicity.
Kitchin, K T; Brown, J L; Kulkarni, A P
1994-01-01
Twenty-one chemicals carcinogenic in rodent bioassays were selected for study. The chemicals were administered by gavage in two dose levels to female Sprague-Dawley rats. The effects of these 21 chemicals on four biochemical assays [hepatic DNA damage by alkaline elution (DD), hepatic ornithine decarboxylase activity (ODC), serum alanine aminotransferase activity (ALT), and hepatic cytochrome P-450 content (P450)] were determined. Available data from seven cancer predictors published by others [the Ames test (AMES), mutation in Salmonella typhimurium TA 1537 (TA 1537), structural alerts (SA), mutation in mouse lymphoma cells (MOLY), chromosomal aberrations in Chinese hamster ovary cells (ABS), sister chromatid exchange in hamster ovary cells (SCE), and the ke test (ke)] were also compiled for these 21 chemical carcinogens plus 28 carcinogens and 62 noncarcinogens already published by our laboratory. From the resulting 111 (chemicals) by 11 (individual cancer predictors) data matrix, the five operational characteristics (sensitivity, specificity, positive predictivity, negative predictivity, and concordance) of each of the 11 individual cancer predictors (four biochemical parameters of this study and seven cancer predictors of others) are presented. Two examples of complementarity or synergy of composite cancer predictors were found. To obtain maximum concordance it was necessary to combine both genotoxic and nongenotoxic cancer predictors. The composite cancer predictor (DD or [ODC and P450] or [ODC and ALT]) had higher concordance than did any of the four individual cancer predictors from which it was constructed. Similarly, the composite cancer predictor (TA 1537 or DD or [ODC and P450] or [ODC and ALT]) had higher concordance than any of its five individual constituent cancer predictors. Complementarity or synergy has been demonstrated both 1) among genotoxic cancer predictors (DD and TA 1537) and 2) between nongenotoxic (ODC, P450, and ALT) and genotoxic cancer
1980-05-31
Multiconstraint Zero-One Knapsack Problem," The Journal of the Operational Research Society, Vol. 30, 1979, pp. 369-378. [41] Kepler, C... programming. Shih [40] has written on a branch and bound method, Kepler and Blackman [41] have demonstrated the use of dynamic programming in the selection of... Portfolio Selection Model," IEEE Transactions on Engineering Management, Vol. EM-26, No. 1, 1979, pp. 2-7. [40] Shih, Wei, "A Branch and
NASA Astrophysics Data System (ADS)
Updike, Clark A.; Greeley, Scott W.; King, James A.
1998-10-01
In the process of designing a control actuator for a vibration cancellation system demonstration on a large, precision optical testbed, it was discovered that the support struts on which the control actuators attach could not be disassembled. This led to the development of a Linear Precision ACTuator (LPACT) with a novel two piece design which could be clamped around the strut in-situ. The design requirements, LPACT characteristics, and LPACT test results are fully described and contrasted with other earlier LPACT designs. Cancellation system performance results are presented for a 3 tone disturbance case. Excellent results, on the order of 40 dB of attenuation per tone (down to the noise floor on two disturbances), are achieved using an Adaptive Neural Controller (ANC).
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
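For context, the conventional regularized baseline that the Bayesian method generalizes can be sketched as a closed-form Tikhonov solve of the SRS system. This is a minimal illustration with names of our choosing, not the authors' variational Bayes algorithm.

```python
import numpy as np

def tikhonov_solve(M, y, alpha):
    """Closed-form minimizer of ||M x - y||^2 + alpha * ||x||^2.

    M plays the role of the source-receptor sensitivity (SRS) matrix and
    y the vector of observed concentrations; alpha is the hand-tuned
    regularization parameter that the Bayesian approach estimates instead.
    """
    n = M.shape[1]
    return np.linalg.solve(M.T @ M + alpha * np.eye(n), M.T @ y)
```

The manual choice of alpha is exactly the "tuning parameter" problem the abstract describes; full Bayesian estimation replaces it with a quantity inferred from the measurements.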
NASA Technical Reports Server (NTRS)
Bensoussan, A.; Delfour, M. C.; Mitter, S. K.
1976-01-01
Available published results are surveyed for a special class of infinite-dimensional control systems whose evolution is characterized by a semigroup of operators of class C subscript zero. Emphasis is placed on an approach that clarifies the system-theoretic relationship among controllability, stabilizability, stability, and the existence of a solution to an associated operator equation of the Riccati type. Formulation of the optimal control problem is reviewed along with the asymptotic behavior of solutions to a general system of equations and several theorems concerning L2 stability. Examples are briefly discussed which involve second-order parabolic systems, first-order hyperbolic systems, and distributed boundary control.
Cameron, M.K.; Fomel, S.B.; Sethian, J.A.
2009-01-01
In the present work we derive and study a nonlinear elliptic PDE coming from the problem of estimating the sound speed inside the Earth. The physical setting allows us to pose only a Cauchy problem, which is therefore ill-posed. However, we are still able to solve it numerically on a time interval long enough to be of practical use. We used two approaches. The first is a finite-difference time-marching scheme inspired by the Lax-Friedrichs method; its key features are the Lax-Friedrichs averaging and the wide stencil in space. The second is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) damping of the high harmonics, by the small error terms of the finite-difference scheme and by the truncation of the Chebyshev series, and (4) the need to compute the solution only for a short interval of time. We test our numerical schemes on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
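The Lax-Friedrichs averaging mentioned above can be illustrated on a model 1D advection equation; this sketch is not the paper's elliptic solver, only the generic update whose neighbour averaging damps high harmonics.

```python
import numpy as np

def lax_friedrichs_step(u, a, dx, dt):
    """One Lax-Friedrichs update for u_t + a*u_x = 0 on a periodic grid.

    Replaces u_j by the average of its neighbours (the dissipative term
    that damps high harmonics) plus a centred flux difference.
    """
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    return 0.5 * (up + um) - a * dt / (2.0 * dx) * (up - um)
```

Under the CFL condition |a|*dt/dx <= 1 the update is a convex combination of neighbouring values, so the maximum norm cannot grow.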
NASA Astrophysics Data System (ADS)
Chen, De-Han; Hofmann, Bernd; Zou, Jun
2017-01-01
We consider the ill-posed operator equation Ax = y with an injective and bounded linear operator A mapping between $\ell^2$ and a Hilbert space Y, possessing the unique solution $x^\dagger = \{x^\dagger_k\}_{k=1}^{\infty}$. For the cases that sparsity $x^\dagger \in \ell^0$ is expected but often slightly violated in practice, we investigate, in comparison with $\ell^1$-regularization, the elastic-net regularization, where the penalty is a weighted superposition of the $\ell^1$-norm and the $\ell^2$-norm square, under the assumption that $x^\dagger \in \ell^1$. There occur two positive parameters in this approach, the weight parameter $\eta$ and the regularization parameter as the multiplier of the whole penalty in the Tikhonov functional, whereas only one regularization parameter arises in $\ell^1$-regularization. Based on the variational inequality approach for the description of the solution smoothness with respect to the forward operator A and exploiting the method of approximate source conditions, we present some results to estimate the rate of convergence for the elastic-net regularization. The occurring rate function contains the rate of the decay $x^\dagger_k \to 0$ for $k \to \infty$ and the classical smoothness properties of $x^\dagger$ as an element in $\ell^2$.
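A minimal finite-dimensional sketch of elastic-net regularization, solved by proximal-gradient (ISTA) iterations: the objective mirrors the abstract's weighted l1 + squared-l2 penalty, but the solver and the parameter names are our own assumptions, not the paper's analysis.

```python
import numpy as np

def elastic_net_ista(A, y, alpha, eta, n_iter=1000):
    """Minimize ||A x - y||^2 + alpha * (||x||_1 + eta * ||x||_2^2) by ISTA.

    alpha is the overall regularization parameter and eta the weight of the
    squared l2 term, mirroring the two parameters in the abstract.
    """
    x = np.zeros(A.shape[1])
    t = 0.45 / np.linalg.norm(A, 2) ** 2   # step below 1/L; L = 2*||A||^2
    for _ in range(n_iter):
        z = x - t * 2.0 * A.T @ (A @ x - y)   # gradient step on the data term
        # proximal step: soft-threshold for l1, shrink for the squared l2 term
        x = np.sign(z) * np.maximum(np.abs(z) - t * alpha, 0.0) \
            / (1.0 + 2.0 * t * alpha * eta)
    return x
```

Setting eta = 0 recovers plain l1 (ISTA/soft-thresholding); the extra quadratic term makes the proximal map strictly contractive, which stabilizes the iterates.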
Complementarity of dark matter searches in the phenomenological MSSM
Cahill-Rowley, Matthew; Cotta, Randy; Drlica-Wagner, Alex; Funk, Stefan; Hewett, JoAnne; Ismail, Ahmed; Rizzo, Tom; Wood, Matthew
2015-03-11
As is well known, the search for and eventual identification of dark matter in supersymmetry requires a simultaneous, multipronged approach with important roles played by the LHC as well as both direct and indirect dark matter detection experiments. We examine the capabilities of these approaches in the 19-parameter phenomenological MSSM which provides a general framework for complementarity studies of neutralino dark matter. We summarize the sensitivity of dark matter searches at the 7 and 8 (and eventually 14) TeV LHC, combined with those by Fermi, CTA, IceCube/DeepCore, COUPP, LZ and XENON. The strengths and weaknesses of each of these techniques are examined and contrasted and their interdependent roles in covering the model parameter space are discussed in detail. We find that these approaches explore orthogonal territory and that advances in each are necessary to cover the supersymmetric weakly interacting massive particle parameter space. We also find that different experiments have widely varying sensitivities to the various dark matter annihilation mechanisms, some of which would be completely excluded by null results from these experiments.
Rapid Online Analysis of Local Feature Detectors and Their Complementarity
Ehsan, Shoaib; Clark, Adrian F.; McDonald-Maier, Klaus D.
2013-01-01
A vision system that can assess its own performance and take appropriate actions online to maximize its effectiveness would be a step towards achieving the long-cherished goal of imitating humans. This paper proposes a method for performing an online performance analysis of local feature detectors, the primary stage of many practical vision systems. It advocates the spatial distribution of local image features as a good performance indicator and presents a metric that can be calculated rapidly, concurs with human visual assessments and is complementary to existing offline measures such as repeatability. The metric is shown to provide a measure of complementarity for combinations of detectors, correctly reflecting the underlying principles of individual detectors. Qualitative results on well-established datasets for several state-of-the-art detectors are presented based on the proposed measure. Using a hypothesis testing approach and a newly-acquired, larger image database, statistically-significant performance differences are identified. Different detector pairs and triplets are examined quantitatively and the results provide a useful guideline for combining detectors in applications that require a reasonable spatial distribution of image features. A principled framework for combining feature detectors in these applications is also presented. Timing results reveal the potential of the metric for online applications. PMID:23966187
Non-Linear Control Allocation Using Piecewise Linear Functions
2003-08-01
A novel method is presented for the solution of the non-linear control allocation problem. Historically, control allocation has been performed by... linear control allocation problem to be cast as a piecewise linear program. The piecewise linear program is ultimately cast as a mixed-integer linear... piecewise linear control allocation method is shown to be markedly improved when compared to the performance of a more traditional control allocation approach that assumes linearity.
Complementarity and Area-Efficiency in the Prioritization of the Global Protected Area Network
Kullberg, Peter; Toivonen, Tuuli; Montesino Pouzols, Federico; Lehtomäki, Joona; Di Minin, Enrico; Moilanen, Atte
2015-01-01
Complementarity and cost-efficiency are widely used principles for protected area network design. Despite their wide use and robust theoretical underpinnings, their effects on the performance and patterns of priority areas are rarely studied in detail. Here we compare two approaches for identifying management priority areas inside the global protected area network: 1) a scoring-based approach, used in a recently published analysis, and 2) a spatial prioritization method that accounts for complementarity and area-efficiency. Using the same IUCN species distribution data, the complementarity method found an equal-area set of priority areas with double the mean species ranges covered compared to the scoring-based approach. The complementarity set also had 72% more species with full ranges covered, and lacked any coverage for only half of the species compared to the scoring approach. Protected areas in our complementarity-based solution were on average smaller and geographically more scattered. The large difference between the two solutions highlights the need for critical thinking about the selected prioritization method. According to our analysis, accounting for complementarity and area-efficiency can lead to considerable improvements when setting management priorities for the global protected area network. PMID:26678497
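A toy version of complementarity-based selection is the classic greedy maximum-coverage rule: repeatedly add the site that covers the most species not yet represented. This is a stand-in for illustration only, not the spatial prioritization software used in the study.

```python
def greedy_complementary_sites(site_species, k):
    """Pick up to k sites, each step adding the site that covers the most
    species not yet represented (greedy maximum coverage).

    site_species maps a site name to the set of species it contains.
    """
    chosen, covered = [], set()
    remaining = {s: set(sp) for s, sp in site_species.items()}
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda s: len(remaining[s] - covered))
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```

Unlike independent scoring, each pick depends on what the earlier picks already cover, which is the essence of complementarity.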
NASA Astrophysics Data System (ADS)
Giovannacci, D.; Detalle, V.; Martos-Levif, D.; Ogien, J.; Bernikola, E.; Tornari, V.; Hatzigiannakis, K.; Mouhoubi, K.; Bodnar, J.-L.; Walker, G.-C.; Brissaud, D.; Trichereau, B.; Jackson, B.; Bowen, J.
2015-06-01
The abbey church of Chaalis, north of Paris, was founded by Louis VI as a Cistercian monastery on 10th January 1137. In 2013, in the frame of the European Commission's 7th Framework Program project CHARISMA [grant agreement no. 228330], the chapel was used as a practical case study for application of the work done in a task devoted to best practices in historical buildings and monuments. In the chapel, three areas were identified as relevant. The first area was used for an exercise on diagnosis of the different deterioration patterns. The second area was used to analyze a restored area. The third was selected to test some hypotheses on the possibility of using the portable instruments to answer some questions related to the deterioration problems. To inspect this area, different tools were used: visible fluorescence under UV, a THz system, stimulated infrared thermography (SIRT), digital holographic speckle pattern interferometry (DHSPI), and a condition report by a conservator-restorer. The complementarity and synergy offered by the profitable use of the different integrated tools is clearly shown in this practical exercise.
Accounting for complementarity to maximize monitoring power for species management.
Tulloch, Ayesha I T; Chadès, Iadine; Possingham, Hugh P
2013-10-01
To choose among conservation actions that may benefit many species, managers need to monitor the consequences of those actions. Decisions about which species to monitor from a suite of different species being managed are hindered by natural variability in populations and uncertainty in several factors: the ability of the monitoring to detect a change, the likelihood of the management action being successful for a species, and how representative species are of one another. However, the literature provides little guidance about how to account for these uncertainties when deciding which species to monitor to determine whether the management actions are delivering outcomes. We devised an approach that applies decision science and selects the best complementary suite of species to monitor to meet specific conservation objectives. We created an index for indicator selection that accounts for the likelihood of successfully detecting a real trend due to a management action and whether that signal provides information about other species. We illustrated the benefit of our approach by analyzing a monitoring program for invasive predator management aimed at recovering 14 native Australian mammals of conservation concern. Our method selected the species that provided more monitoring power at lower cost relative to the current strategy and traditional approaches that consider only a subset of the important considerations. Our benefit function accounted for natural variability in species growth rates, uncertainty in the responses of species to the prescribed action, and how well species represent others. Monitoring programs that ignore uncertainty, likelihood of detecting change, and complementarity between species will be more costly and less efficient and may waste funding that could otherwise be used for management.
Bee diversity effects on pollination depend on functional complementarity and niche shifts.
Fründ, Jochen; Dormann, Carsten F; Holzschuh, Andrea; Tscharntke, Teja
2013-09-01
Biodiversity is important for many ecosystem processes. Global declines in pollinator diversity and abundance have been recognized, raising concerns about a pollination crisis of crops and wild plants. However, experimental evidence for effects of pollinator species diversity on plant reproduction is extremely scarce. We established communities with 1-5 bee species to test how seed production of a plant community is determined by bee diversity. Higher bee diversity resulted in higher seed production, but the strongest difference was observed for one compared to more than one bee species. Functional complementarity among bee species had a far higher explanatory power than bee diversity, suggesting that additional bee species only benefit pollination when they increase coverage of functional niches. In our experiment, complementarity was driven by differences in flower and temperature preferences. Interspecific interactions among bee species contributed to realized functional complementarity, as bees reduced interspecific overlap by shifting to alternative flowers in the presence of other species. This increased the number of plant species visited by a bee community and demonstrates a new mechanism for a biodiversity-function relationship ("interactive complementarity"). In conclusion, our results highlight both the importance of bee functional diversity for the reproduction of plant communities and the need to identify complementarity traits for accurately predicting pollination services by different bee communities.
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet, it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center-of-mass energy above, say, 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies, a number of problems have to be solved. These problems are of two kinds: one related to the feasibility of the principle, the other associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype, I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
Self-complementarity within proteins: bridging the gap between binding and folding.
Basu, Sankar; Bhattacharyya, Dhananjay; Banerjee, Rahul
2012-06-06
Complementarity, in terms of both shape and electrostatic potential, has been quantitatively estimated at protein-protein interfaces and used extensively to predict the specific geometry of association between interacting proteins. In this work, we attempted to place both binding and folding on a common conceptual platform based on complementarity. To that end, we estimated (for the first time to our knowledge) electrostatic complementarity (Em) for residues buried within proteins. Em measures the correlation of surface electrostatic potential at protein interiors. The results show fairly uniform and significant values for all amino acids. Interestingly, hydrophobic side chains also attain appreciable complementarity primarily due to the trajectory of the main chain. Previous work from our laboratory characterized the surface (or shape) complementarity (Sm) of interior residues, and both of these measures have now been combined to derive two scoring functions to identify the native fold amid a set of decoys. These scoring functions are somewhat similar to functions that discriminate among multiple solutions in a protein-protein docking exercise. The performances of both of these functions on state-of-the-art databases were comparable if not better than most currently available scoring functions. Thus, analogously to interfacial residues of protein chains associated (docked) with specific geometry, amino acids found in the native interior have to satisfy fairly stringent constraints in terms of both Sm and Em. The functions were also found to be useful for correctly identifying the same fold for two sequences with low sequence identity. Finally, inspired by the Ramachandran plot, we developed a plot of Sm versus Em (referred to as the complementarity plot) that identifies residues with suboptimal packing and electrostatics which appear to be correlated to coordinate errors.
A methodology to quantify and optimize time complementarity between hydropower and solar PV systems
NASA Astrophysics Data System (ADS)
Kougias, Ioannis; Szabó, Sándor; Monforti-Ferrario, Fabio; Huld, Thomas; Bódis, Katalin
2016-04-01
Hydropower and solar energy are expected to play a major role in achieving renewable energy sources' (RES) penetration targets. However, the integration of RES in the energy mix needs to overcome technical challenges related to the grid's operation. There is therefore an increasing need to explore approaches where different RES operate synergetically. Ideally, hydropower and solar PV systems can be jointly developed in systems where their electricity output profiles complement each other as much as possible, minimizing the need for reserve capacities and storage costs. A straightforward way to achieve that is by optimizing the complementarity among RES systems both temporally and spatially. The present research developed a methodology that quantifies the degree of time complementarity between small-scale hydropower stations and solar PV systems and examines ways to increase it. The methodology analyses high-resolution spatial and temporal data for solar radiation obtained from the existing PVGIS model (available online at: http://re.jrc.ec.europa.eu/pvgis/) and associates it with hydrological information on water inflows to a hydropower station. It builds on an exhaustive optimization algorithm that tests possible alterations of the PV system installation (azimuth, tilt), aiming to increase the complementarity with minor compromises in the total solar energy output. The methodology has been tested in several case studies and the results indicated variations among regions and different hydraulic regimes. In some cases a small compromise in the solar energy output yielded a significant increase in complementarity, while in other cases the effect was not as strong. Our contribution presents these findings in detail and aims to initiate a discussion on the role and gains of increased complementarity between solar and hydropower energies. Reference: Kougias I, Szabó S, Monforti-Ferrario F, Huld T, Bódis K (2016). A methodology for
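The exhaustive search over PV installation parameters can be sketched as below, using (as our own choice of complementarity measure) the negative correlation between the hydro and PV output series; the actual metric and data in the study differ.

```python
import numpy as np

def best_pv_orientation(hydro, pv_profile, azimuths, tilts):
    """Exhaustively test (azimuth, tilt) pairs and return the one whose PV
    output is most complementary to the hydro series, scored here as the
    negative Pearson correlation (our choice of metric).

    pv_profile(azimuth, tilt) must return the PV output series for that
    orientation, e.g. interpolated from PVGIS-style radiation data.
    """
    best, best_score = None, -np.inf
    for az in azimuths:
        for tilt in tilts:
            pv = pv_profile(az, tilt)
            score = -np.corrcoef(hydro, pv)[0, 1]
            if score > best_score:
                best, best_score = (az, tilt), score
    return best, best_score
```

Comparing the total energy of the chosen orientation against the unconstrained optimum then quantifies how much output is traded for the gain in complementarity.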
NASA Astrophysics Data System (ADS)
Carles, M.; Torres-Espallardo, I.; Alberich-Bayarri, A.; Olivas, C.; Bello, P.; Nestle, U.; Martí-Bonmatí, L.
2017-01-01
A major source of error in quantitative PET/CT scans of lung cancer tumors is respiratory motion. The impact of respiratory motion on the variability of PET texture features (TF) has not been properly studied with experimental phantoms. The primary aim of this work was to evaluate the current use of PET texture analysis for heterogeneity characterization in lesions affected by respiratory motion. Twenty-eight heterogeneous lesions were simulated by a mixture of alginate and 18F-fluoro-2-deoxy-D-glucose (FDG). Sixteen respiratory patterns were applied. Firstly, the TF response for different heterogeneous phantoms and its robustness with respect to the segmentation method were calculated. Secondly, the variability of TF derived from PET images with (gated, G-) and without (ungated, U-) motion compensation was analyzed. Finally, TF complementarity was assessed. In the comparison of TF derived from the ideal contour with TF derived from 40%-threshold and adaptive-threshold PET contours, 7/8 TF showed strong linear correlation (LC) (p < 0.001, r > 0.75), despite a significant volume underestimation. Independence of lesion movement (LC in 100% of the combined pairs of movements, p < 0.05) was obtained for 1/8 TF with the U-image (width of the volume-activity histogram, WH) and 4/8 TF with the G-image (WH, plus energy (ENG), local homogeneity (LH) and entropy (ENT) derived from the co-occurrence matrix). Their variability in terms of the coefficient of variance (C_V) was C_V(WH) = 0.18 on the U-image and C_V(WH) = 0.24, C_V(ENG) = 0.15, C_V(LH) = 0.07 and C_V(ENT) = 0.06 on the G-image. Apart from WH (r > 0.9, p < 0.001), none of these TF showed LC with C_max. Complementarity was observed for the TF pairs ENG-LH, CONT (contrast)-ENT and LH-ENT. In conclusion, the effect of
Experimental investigation of halogen-bond hard-soft acid-base complementarity.
Riel, Asia Marie S; Jessop, Morly J; Decato, Daniel A; Massena, Casey J; Nascimento, Vinicius R; Berryman, Orion B
2017-04-01
The halogen bond (XB) is a topical noncovalent interaction of rapidly increasing importance. The XB employs a 'soft' donor atom in comparison to the 'hard' proton of the hydrogen bond (HB). This difference has led to the hypothesis that XBs can form more favorable interactions with 'soft' bases than HBs. While computational studies have supported this suggestion, solution and solid-state data are lacking. Here, XB soft-soft complementarity is investigated with a bidentate receptor that shows similar associations with neutral carbonyls and heavy chalcogen analogs. The solution speciation and XB soft-soft complementarity are supported by four crystal structures containing neutral and anionic soft Lewis bases.
Metabolic Complementarity and Genomics of the Dual Bacterial Symbiosis of Sharpshooters
Wu, Dongying; Daugherty, Sean C; Van Aken, Susan E; Pai, Grace H; Watkins, Kisha L; Khouri, Hoda; Tallon, Luke J; Zaborsky, Jennifer M; Dunbar, Helen E; Tran, Phat L; Moran, Nancy A
2006-01-01
Mutualistic intracellular symbiosis between bacteria and insects is a widespread phenomenon that has contributed to the global success of insects. The symbionts, by provisioning nutrients lacking from diets, allow various insects to occupy or dominate ecological niches that might otherwise be unavailable. One such insect is the glassy-winged sharpshooter (Homalodisca coagulata), which feeds on xylem fluid, a diet exceptionally poor in organic nutrients. Phylogenetic studies based on rRNA have shown two types of bacterial symbionts to be coevolving with sharpshooters: the gamma-proteobacterium Baumannia cicadellinicola and the Bacteroidetes species Sulcia muelleri. We report here the sequencing and analysis of the 686,192–base pair genome of B. cicadellinicola and approximately 150 kilobase pairs of the small genome of S. muelleri, both isolated from H. coagulata. Our study, which to our knowledge is the first genomic analysis of an obligate symbiosis involving multiple partners, suggests striking complementarity in the biosynthetic capabilities of the two symbionts: B. cicadellinicola devotes a substantial portion of its genome to the biosynthesis of vitamins and cofactors required by animals and lacks most amino acid biosynthetic pathways, whereas S. muelleri apparently produces most or all of the essential amino acids needed by its host. This finding, along with other results of our genome analysis, suggests the existence of metabolic codependency among the two unrelated endosymbionts and their insect host. This dual symbiosis provides a model case for studying correlated genome evolution and genome reduction involving multiple organisms in an intimate, obligate mutualistic relationship. In addition, our analysis provides insight for the first time into the differences in symbionts between insects (e.g., aphids) that feed on phloem versus those like H. coagulata that feed on xylem. Finally, the genomes of these two symbionts provide potential targets for
Variationally consistent discretization schemes and numerical algorithms for contact problems
NASA Astrophysics Data System (ADS)
Wohlmuth, Barbara
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently, by means of a non-linear complementarity function, as a system of equations. Although this system is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual-based and equilibration-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as a Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential-algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of
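In the simplest finite-dimensional case, the solution strategy summarized above (rewriting the complementarity conditions via an NCP function and applying a semi-smooth Newton method as a primal-dual active set strategy) reduces to an active-set iteration for the linear complementarity problem. The sketch below assumes a symmetric positive definite matrix M (e.g. a discretized stiffness matrix); it is a minimal illustration of the technique, not the paper's contact solver.

```python
import numpy as np

def lcp_active_set(M, q, c=1.0, max_iter=50):
    """Primal-dual active set sketch for the linear complementarity
    problem: find z with  0 <= z,  w = M z + q >= 0,  z . w = 0.
    Assumes M symmetric positive definite. Equivalent to a semi-smooth
    Newton method on the NCP function C(z, w) = z - max(0, z - c*w)."""
    n = len(q)
    z = np.zeros(n)
    w = M @ z + q
    active = z - c * w > 0        # components where we enforce w_i = 0
    for _ in range(max_iter):
        z = np.zeros(n)
        if active.any():
            # w_i = 0 on the active set, z_i = 0 on the inactive set
            z[active] = np.linalg.solve(M[np.ix_(active, active)], -q[active])
        w = M @ z + q
        new_active = z - c * w > 0
        if np.array_equal(new_active, active):
            break                 # active set settled: KKT system solved
        active = new_active
    return z, w
```

For well-conditioned SPD systems the active set typically settles in a handful of iterations, which reflects the super-linear behavior the abstract attributes to the semi-smooth Newton interpretation.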
Cundiff, Jenny M; Smith, Timothy W; Butner, Jonathan; Critchfield, Kenneth L; Nealey-Moore, Jill
2015-01-01
The principle of complementarity in interpersonal theory states that an actor's behavior tends to "pull, elicit, invite, or evoke" responses from interaction partners who are similar in affiliation (i.e., warmth vs. hostility) and opposite in control (i.e., dominance vs. submissiveness). Furthermore, complementary interactions are proposed to evoke less negative affect and promote greater relationship satisfaction. These predictions were examined in two studies of married couples. Results suggest that complementarity in affiliation describes a robust general pattern of marital interaction, but complementarity in control varies across contexts. Consistent with behavioral models of marital interaction, greater levels of affiliation and lower control by partners-not complementarity in affiliation or control-were associated with less anger and anxiety and greater relationship quality. Partners' levels of affiliation and control combined in ways other than complementarity-mostly additively, but sometimes synergistically-to predict negative affect and relationship satisfaction.
Is the firewall consistent? Gedanken experiments on black hole complementarity and firewall proposal
Hwang, Dong-il; Lee, Bum-Hoon; Yeom, Dong-han E-mail: bhl@sogang.ac.kr
2013-01-01
In this paper, we discuss black hole complementarity and the firewall proposal at length. Black hole complementarity is inevitable if we assume the following five things: unitarity, the entropy-area formula, the existence of an information observer, semi-classical quantum field theory for an asymptotic observer, and general relativity for an in-falling observer. However, large N rescaling and the AMPS argument show that black hole complementarity is inconsistent. To salvage the basic philosophy of black hole complementarity, AMPS introduced a firewall around the horizon. According to large N rescaling, the firewall should be located close to the apparent horizon. We investigate the consistency of the firewall with two critical conditions: the firewall should be near the time-like apparent horizon, and it should not affect the future infinity. To probe this, we introduce a gravitational collapse with a false vacuum lump, which can generate a spacetime structure with disconnected apparent horizons. This reveals a situation in which there is a firewall outside the event horizon while the apparent horizon is absent. Therefore, the firewall, if it exists, not only modifies general relativity for an in-falling observer, but also modifies semi-classical quantum field theory for an asymptotic observer.
Complementarity as a Program Evaluation Strategy: A Focus on Qualitative and Quantitative Methods.
ERIC Educational Resources Information Center
Lafleur, Clay
Use of complementarity as a deliberate and necessary program evaluation strategy is discussed. Quantitative and qualitative approaches are viewed as complementary and can be integrated into a single study. The synergy that results from using complementary methods in a single study seems to enhance understanding and interpretation. A review of the…
ERIC Educational Resources Information Center
O'Toole, John; Dunn, Julie
2008-01-01
This article reports the findings of a research project that saw researchers from interaction design and drama education come together with a group of eleven and twelve year olds to investigate the current and future complementarity of computers and live classroom drama. The project was part of a pilot feasibility study commissioned by the…
Revisiting the quark-lepton complementarity and triminimal parametrization of neutrino mixing matrix
Kang, Sin Kyu
2011-05-01
We examine how a parametrization of neutrino mixing matrix reflecting quark-lepton complementarity can be probed by considering phase-averaged oscillation probabilities, flavor composition of neutrino fluxes coming from atmospheric and astrophysical neutrinos and lepton flavor violating radiative decays. We discuss some distinct features of the parametrization by comparing the triminimal parametrization of perturbations to the tribimaximal neutrino mixing matrix.
Hernandez, Pauline; Picon-Cochard, Catherine
2016-01-01
Legume species promote productivity and increase the digestibility of herbage in grasslands. Considerable experimental data also indicate that communities with legumes produce more above-ground biomass than is expected from monocultures. While this has been attributed to N facilitation, evidence to identify the mechanisms involved is still lacking, and the role of complementarity in soil water acquisition through vertical root differentiation remains unclear. We used a 20-month mesocosm experiment to investigate the effects of species richness (single species, two- and five-species mixtures) and functional diversity (presence of the legume Trifolium repens) on a set of traits related to light, N and water use, measured at the community level. We found a positive effect of Trifolium presence and abundance on biomass production and complementarity effects in the two-species mixtures from the second year. In addition, the community traits related to water and N acquisition and use (leaf area, N, water-use efficiency, and deep root growth) were higher in the presence of Trifolium. With a multiple regression approach, we showed that the traits related to water acquisition and use were, together with N, the main determinants of biomass production and complementarity effects in diverse mixtures. At shallow soil layers, the lower root mass of Trifolium and higher soil moisture should increase soil water availability for the associated grass species. Conversely, at deep soil layers, higher root growth and lower soil moisture mirror the increased soil resource use of mixtures. Altogether, these results highlight N facilitation but, above all, vertical soil differentiation and thus complementarity for water acquisition and use in mixtures with Trifolium. Contrary to grass-Trifolium mixtures, no significant over-yielding was measured for grass mixtures, even those having complementary traits (short and shallow vs. tall and deep). Thus, vertical complementarity for soil resource uptake in mixtures was not only
Hydro-elastic complementarity in black branes at large D
NASA Astrophysics Data System (ADS)
Emparan, Roberto; Izumi, Keisuke; Luna, Raimon; Suzuki, Ryotaku; Tanabe, Kentaro
2016-06-01
We obtain the effective theory for the non-linear dynamics of black branes — both neutral and charged, in asymptotically flat or anti-de Sitter spacetimes — to leading order in the inverse-dimensional expansion. We find that black branes evolve as viscous fluids, but when they settle down they are more naturally viewed as solutions of an elastic soap-bubble theory. The two views are complementary: the same variable is regarded in one case as the energy density of the fluid, in the other as the deformation of the elastic membrane. The large-D theory captures finite-wavelength phenomena beyond the conventional reach of hydrodynamics. For asymptotically flat charged black branes (either Reissner-Nordström or p-brane-charged black branes) it yields the non-linear evolution of the Gregory-Laflamme instability at large D and its endpoint at stable non-uniform black branes. For Reissner-Nordström AdS black branes we find that sound perturbations do not propagate (have purely imaginary frequency) when their wavelength is below a certain charge-dependent value. We also study the polarization of black branes induced by an external electric field.
Linear quadratic optimal control for symmetric systems
NASA Technical Reports Server (NTRS)
Lewis, J. H.; Martin, C. F.
1983-01-01
Special symmetries are present in many control problems. This paper addresses the problem of determining linear-quadratic optimal control problems whose solutions preserve the symmetry of the initial linear control system.
Wave-particle dualism and complementarity unraveled by a different mode.
Menzel, Ralf; Puhlmann, Dirk; Heuer, Axel; Schleich, Wolfgang P
2012-06-12
The precise knowledge of one of two complementary experimental outcomes prevents us from obtaining complete information about the other one. This formulation of Niels Bohr's principle of complementarity, when applied to the paradigm of wave-particle dualism, that is, to Young's double-slit experiment, implies that information about the slit through which a quantum particle has passed erases interference. In the present paper we report a double-slit experiment using two photons created by spontaneous parametric down-conversion, where we observe interference in the signal photon despite the fact that we have located it in one of the slits due to its entanglement with the idler photon. This surprising aspect of complementarity comes to light through our special choice of the TEM₀₁ pump mode. According to quantum field theory, the signal photon is then in a coherent superposition of two distinct wave vectors, giving rise to interference fringes analogous to those of two mechanical slits.
Complementarity of PALM and SOFI for super-resolution live-cell imaging of focal adhesions
NASA Astrophysics Data System (ADS)
Deschout, Hendrik; Lukes, Tomas; Sharipov, Azat; Szlag, Daniel; Feletti, Lely; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Leutenegger, Marcel; Lasser, Theo; Radenovic, Aleksandra
2016-12-01
Live-cell imaging of focal adhesions requires a sufficiently high temporal resolution, which remains a challenge for super-resolution microscopy. Here we address this important issue by combining photoactivated localization microscopy (PALM) with super-resolution optical fluctuation imaging (SOFI). Using simulations and fixed-cell focal adhesion images, we investigate the complementarity between PALM and SOFI in terms of spatial and temporal resolution. This PALM-SOFI framework is used to image focal adhesions in living cells, while obtaining a temporal resolution below 10 s. We visualize the dynamics of focal adhesions, and reveal local mean velocities around 190 nm min⁻¹. The complementarity of PALM and SOFI is assessed in detail with a methodology that integrates a resolution and signal-to-noise metric. This PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of focal adhesions through the estimation of molecular parameters such as fluorophore densities and photoactivation or photoswitching kinetics.
Plant diversity increases spatio-temporal niche complementarity in plant-pollinator interactions.
Venjakob, Christine; Klein, Alexandra-Maria; Ebeling, Anne; Tscharntke, Teja; Scherber, Christoph
2016-04-01
Ongoing biodiversity decline impairs ecosystem processes, including pollination. Flower visitation, an important indicator of pollination services, is influenced by plant species richness. However, the spatio-temporal responses of different pollinator groups to plant species richness have not yet been analyzed experimentally. Here, we used an experimental plant species richness gradient to analyze plant-pollinator interactions with an unprecedented spatio-temporal resolution. We observed four pollinator functional groups (honeybees, bumblebees, solitary bees, and hoverflies) in experimental plots at three different vegetation strata between sunrise and sunset. Visits were modified by plant species richness interacting with time and space. Furthermore, the complementarity of pollinator functional groups in space and time was stronger in species-rich mixtures. We conclude that high plant diversity should ensure stable pollination services, mediated via spatio-temporal niche complementarity in flower visitation.
Todres, L; Wheeler, S
2001-02-01
The focus of this paper draws on the thinking of Husserl, Dilthey and Heidegger to identify elements of the phenomenological movement that can provide focus and direction for qualitative research in nursing. The authors interpret this tradition in two ways: emphasizing the possible complementarity of phenomenology, hermeneutics and existentialism, and demonstrating how these emphases ask for grounding, reflexivity and humanization in qualitative research. The paper shows that the themes of grounding, reflexivity and humanization are particularly important for nursing research.
Kraut, Daniel A; Sigala, Paul A; Pybus, Brandon; Liu, Corey W; Ringe, Dagmar; Petsko, Gregory A; Herschlag, Daniel
2006-04-01
A longstanding proposal in enzymology is that enzymes are electrostatically and geometrically complementary to the transition states of the reactions they catalyze and that this complementarity contributes to catalysis. Experimental evaluation of this contribution, however, has been difficult. We have systematically dissected the potential contribution to catalysis from electrostatic complementarity in ketosteroid isomerase. Phenolates, analogs of the transition state and reaction intermediate, bind and accept two hydrogen bonds in an active site oxyanion hole. The binding of substituted phenolates of constant molecular shape but increasing pKa models the charge accumulation in the oxyanion hole during the enzymatic reaction. As charge localization increases, the NMR chemical shifts of protons involved in oxyanion hole hydrogen bonds increase by 0.50-0.76 ppm/pKa unit, suggesting a bond shortening of 0.02 Å/pKa unit. Nevertheless, there is little change in binding affinity across a series of substituted phenolates (ΔΔG = -0.2 kcal/mol/pKa unit). The small effect of increased charge localization on affinity occurs despite the shortening of the hydrogen bonds and a large favorable change in binding enthalpy (ΔΔH = -2.0 kcal/mol/pKa unit). This shallow dependence of binding affinity suggests that electrostatic complementarity in the oxyanion hole makes at most a modest contribution to catalysis of 300-fold. We propose that geometrical complementarity between the oxyanion hole hydrogen-bond donors and the transition state oxyanion provides a significant catalytic contribution, and suggest that KSI, like other enzymes, achieves its catalytic prowess through a combination of modest contributions from several mechanisms rather than from a single dominant contribution.
Brown, Marion B; Schlacher, Thomas A; Schoeman, David S; Weston, Michael A; Huijbers, Chantal M; Olds, Andrew D; Connolly, Rod M
2015-10-01
Species composition is expected to alter ecological function in assemblages if species traits differ strongly. Such effects are often large and persistent for nonnative carnivores invading islands. Alternatively, high similarity in traits within assemblages creates a degree of functional redundancy in ecosystems. Here we tested whether species turnover results in functional ecological equivalence or complementarity, and whether invasive carnivores on islands significantly alter such ecological function. The model system consisted of vertebrate scavengers (dominated by raptors) foraging on animal carcasses on ocean beaches on two Australian islands, one with and one without invasive red foxes (Vulpes vulpes). Partitioning of scavenging events among species, carcass removal rates, and detection speeds were quantified using camera traps baited with fish carcasses at the dune-beach interface. Complete segregation of temporal foraging niches between mammals (nocturnal) and birds (diurnal) reflects complementarity in carrion utilization. Conversely, functional redundancy exists within the bird guild where several species of raptors dominate carrion removal in a broadly similar way. As predicted, effects of red foxes were large. They substantially changed the nature and rate of the scavenging process in the system: (1) foxes consumed over half (55%) of all carrion available at night, compared with negligible mammalian foraging at night on the fox-free island, and (2) significant shifts in the composition of the scavenger assemblages consuming beach-cast carrion are the consequence of fox invasion at one island. Arguably, in the absence of other mammalian apex predators, the addition of red foxes creates a new dimension of functional complementarity in beach food webs. However, this functional complementarity added by foxes is neither benign nor neutral, as marine carrion subsidies to coastal red fox populations are likely to facilitate their persistence as exotic
Complementarity of resonant and nonresonant strong WW scattering at SSC and LHC
Chanowitz, M.S.
1992-08-01
Signals and backgrounds for strong WW scattering at the SSC and LHC are considered. Complementarity of resonant signals in the I = 1 WZ channel and nonresonant signals in the I = 2 W⁺W⁺ channel is illustrated using a chiral Lagrangian with a J = 1 "ρ" resonance. Results are presented for purely leptonic final states in the W±Z, W⁺W⁺ + W⁻W⁻, and ZZ channels.
Jensen, Peter D; Zhang, Yuanji; Wiggins, B Elizabeth; Petrick, Jay S; Zhu, Jin; Kerstetter, Randall A; Heck, Gregory R; Ivashuta, Sergey I
2013-01-01
Long double-stranded RNAs (long dsRNAs) are precursors for the effector molecules of sequence-specific RNA-based gene silencing in eukaryotes. Plant cells can contain numerous endogenous long dsRNAs. This study demonstrates that such endogenous long dsRNAs in plants have sequence complementarity to human genes. Many of these complementary long dsRNAs have perfect sequence complementarity of at least 21 nucleotides to human genes; enough complementarity to potentially trigger gene silencing in targeted human cells if delivered in functional form. However, the number and diversity of long dsRNA molecules in plant tissue from crops such as lettuce, tomato, corn, soy and rice with complementarity to human genes that have a long history of safe consumption supports a conclusion that long dsRNAs do not present a significant dietary risk.
Trophic complementarity drives the biodiversity-ecosystem functioning relationship in food webs.
Poisot, Timothée; Mouquet, Nicolas; Gravel, Dominique
2013-07-01
The biodiversity-ecosystem functioning (BEF) relationship is central in community ecology. Its drivers in competitive systems (sampling effect and functional complementarity) are intuitive and elegant, but we lack an integrative understanding of these drivers in complex ecosystems. Because networks encompass two key components of the BEF relationship (species richness and biomass flow), they provide a key to identify these drivers, assuming that we have a meaningful measure of functional complementarity. In a network, diversity can be defined by species richness, the number of trophic levels, but perhaps more importantly, the diversity of interactions. In this paper, we define the concept of trophic complementarity (TC), which emerges through exploitative and apparent competition processes, and study its contribution to ecosystem functioning. Using a model of trophic community dynamics, we show that TC predicts various measures of ecosystem functioning, and generate a range of testable predictions. We find that, in addition to the number of species, the structure of their interactions needs to be accounted for to predict ecosystem productivity.
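The abstract does not reproduce the paper's exact TC index, but the idea of complementarity emerging from the diversity of interactions can be illustrated with a simple stand-in: the mean pairwise dissimilarity of consumers' resource sets. The `diets` representation and the Jaccard dissimilarity below are assumptions for this sketch, not the authors' definition.

```python
from itertools import combinations

def trophic_complementarity(diets):
    """Mean pairwise Jaccard dissimilarity between consumers' resource
    sets: 0 when all consumers eat the same prey, 1 when no two
    consumers share any prey. One plausible formalization only."""
    pairs = list(combinations(diets, 2))
    if not pairs:
        return 0.0
    dissim = [1.0 - len(a & b) / len(a | b) for a, b in pairs]
    return sum(dissim) / len(dissim)
```

Two consumers with disjoint diets score 1.0, identical diets score 0.0, and partially overlapping diets fall in between, so the score rises with the diversity of interactions rather than with species richness alone.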
Complementarity in mineral nitrogen use among dominant plant species in a subalpine community.
Pornon, André; Escaravage, Nathalie; Lamaze, Thierry
2007-11-01
The underlying mechanisms that enable plant species to coexist are poorly understood. Complementarity in resource use is among the major mechanisms proposed that could favor species coexistence but is insufficiently documented. In alpine soil, low temperatures are a major constraint for the supply of plant nitrogen. We carried out ¹⁵N labeling of soil mineral N to determine to what extent four major species of a subalpine community compete for N, or develop ionic (NH₄⁺ vs. NO₃⁻) or temporal complementarity. The Poaceae took up much more ¹⁵N per soil area unit than the ericaceous species, and all species displayed three major strategies in exploiting ¹⁵N: (1) uptake mainly early in the growing season (Vaccinium myrtillus), (2) uptake at a slow and similar rate throughout the growing season (Rhododendron ferrugineum), and (3) uptake at high rates over the growing season (Festuca eskia and Nardus stricta). However, while F. eskia used ¹⁵NH₄⁺ mainly early and ¹⁵NO₃⁻ mainly late in the growing season, the reverse was observed for N. stricta. Taking into account ¹⁵N dilution in soil NH₄⁺ and NO₃⁻ pools, we calculated that NH₄⁺ provided more than 80% of the mineral N uptake in Ericaceae and about 60% in grasses. Together, such ionic and temporal complementarity would reduce competition between species and could be a major mechanism promoting species diversity.
Climate Change Mitigation and Adaptation in the Land Use Sector: From Complementarity to Synergy
NASA Astrophysics Data System (ADS)
Duguma, Lalisa A.; Minang, Peter A.; van Noordwijk, Meine
2014-09-01
Currently, mitigation and adaptation measures are handled separately, due to differences in priorities for the measures and segregated planning and implementation policies at international and national levels. There is a growing argument that synergistic approaches to adaptation and mitigation could bring substantial benefits at multiple scales in the land use sector. Nonetheless, efforts to implement synergies between adaptation and mitigation measures are rare due to the weak conceptual framing of the approach and constraining policy issues. In this paper, we explore the attributes of synergy and the necessary enabling conditions and discuss, as an example, experience with the Ngitili system in Tanzania that serves both adaptation and mitigation functions. An in-depth look into current practices suggests that more emphasis is laid on complementarity (i.e., mitigation projects providing adaptation co-benefits and vice versa) than on synergy. Unlike complementarity, synergy should emphasize functionally sustainable landscape systems in which adaptation and mitigation are optimized as part of multiple functions. We argue that the current practice of seeking co-benefits (complementarity) is a necessary but insufficient step toward addressing synergy. Moving forward from complementarity will require a paradigm shift from the current compartmentalization between mitigation and adaptation to systems thinking at the landscape scale. However, enabling policy, institutional, and investment conditions need to be developed at global, national, and local levels to achieve synergistic goals.
Caliman, Adriano; Carneiro, Luciana S.; Leal, João J. F.; Farjalla, Vinicius F.; Bozelli, Reinaldo L.; Esteves, Francisco A.
2012-01-01
Tests of the biodiversity and ecosystem functioning (BEF) relationship have focused little attention on the importance of interactions between species diversity and other attributes of ecological communities such as community biomass. Moreover, BEF research has been mainly derived from studies measuring a single ecosystem process that often represents resource consumption within a given habitat. Focus on single processes has prevented us from exploring the characteristics of ecosystem processes that can be critical in helping us to identify how novel pathways throughout BEF mechanisms may operate. Here, we investigated whether and how the effects of biodiversity mediated by non-trophic interactions among benthic bioturbator species vary according to community biomass and ecosystem processes. We hypothesized that (1) bioturbator biomass and species richness interact to affect the rates of benthic nutrient regeneration [dissolved inorganic nitrogen (DIN) and total dissolved phosphorus (TDP)] and consequently bacterioplankton production (BP) and that (2) the complementarity effects of diversity will be stronger on BP than on nutrient regeneration because the former represents a more integrative process that can be mediated by multivariate nutrient complementarity. We show that the effects of bioturbator diversity on nutrient regeneration increased BP via multivariate nutrient complementarity. Consistent with our prediction, the complementarity effects were significantly stronger on BP than on DIN and TDP. The effects of the biomass-species richness interaction on complementarity varied among the individual processes, but the aggregated measures of complementarity over all ecosystem processes were significantly higher at the highest community biomass level. Our results suggest that the complementarity effects of biodiversity can be stronger on more integrative ecosystem processes, which integrate subsidiary "simpler" processes, via multivariate complementarity.
Complementarity of Historic Building Information Modelling and Geographic Information Systems
NASA Astrophysics Data System (ADS)
Yang, X.; Koehl, M.; Grussenmeyer, P.; Macher, H.
2016-06-01
In this paper, we discuss the potential of integrating semantically rich models from Building Information Modelling (BIM) and Geographical Information Systems (GIS) to build detailed 3D historic models. BIM contributes to the creation of a digital representation having all physical and functional building characteristics in several dimensions, e.g. XYZ (3D), time, and non-architectural information that are necessary for the construction and management of buildings. GIS has potential in handling and managing spatial data, especially in exploring spatial relationships, and is widely used in urban modelling. However, when considering heritage modelling, the specificity of irregular historical components makes it problematic to create the enriched model according to its complex architectural elements obtained from point clouds. Therefore, some open issues limiting historic building 3D modelling will be discussed in this paper: how to deal with the complex elements composing historic buildings in BIM and GIS environments, how to build the enriched historic model, and why construct different levels of detail? By solving these problems, conceptualization, documentation and analysis of enriched Historic Building Information Modelling are developed and compared to traditional 3D models aimed primarily at visualization.
NASA Astrophysics Data System (ADS)
Sidorin, Anatoly
2010-01-01
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed, and the methods of beam focusing in linacs are described.
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution and allowing the bounds for large parameter values of the linear codes to be computed efficiently.
Degtyarev, S P
2008-04-30
Instantaneous support shrinking is studied for a doubly non-linear degenerate parabolic equation in the case of slow diffusion when, in general, the Cauchy initial data are Radon measures. For a non-negative solution, a necessary and sufficient condition for instantaneous support shrinking is obtained in terms of the local behaviour of the mass of the initial data. In the same terms, estimates are obtained for the size of the support, that are sharp with respect to order. Bibliography: 24 titles.
NASA Astrophysics Data System (ADS)
Joglekar, D. M.; Mitra, M.
2015-12-01
The present investigation outlines a method based on the wavelet transform to analyze the vibration response of discrete piecewise linear oscillators, representative of beams with breathing cracks. The displacement and force variables in the governing differential equation are approximated using Daubechies compactly supported wavelets. An iterative scheme is developed to arrive at the optimum transform coefficients, which are back-transformed to obtain the time-domain response. A time-integration scheme, solving a linear complementarity problem at every time step, is devised to validate the proposed wavelet-based method. The applicability of the proposed solution technique is demonstrated by considering several test cases involving a cracked cantilever beam modeled as a bilinear SDOF system subjected to a harmonic excitation. In particular, the presence of higher-order harmonics, originating from the piecewise linear behavior, is confirmed in all the test cases. A parametric study involving variations in the crack depth and crack location is performed to bring out their effect on the relative strengths of the higher-order harmonics. The versatility of the method is demonstrated by considering cases such as mixed-frequency excitation and an MDOF oscillator with multiple bilinear springs. In addition to establishing the wavelet-based method as a viable alternative for analyzing the response of piecewise linear oscillators, the proposed method can be easily extended to solve inverse problems, unlike direct time-integration schemes.
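The bilinear SDOF idealization described in this abstract can be sketched numerically. The snippet below (illustrative parameter values, not those of the paper; it uses plain semi-implicit Euler rather than the paper's wavelet or LCP-based schemes) integrates an oscillator whose stiffness switches when the "crack" opens or closes, which is what injects the higher-order harmonics into the response.

```python
import numpy as np

def bilinear_sdof(m, c, k_open, k_closed, f0, omega, t_end, dt=1e-3):
    """Simulate m*x'' + c*x' + k(x)*x = f0*sin(omega*t), where the stiffness
    switches between k_open (x > 0, crack open) and k_closed (x <= 0, crack
    closed) -- a common breathing-crack idealization."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    x = np.zeros(n)
    v = np.zeros(n)
    for i in range(n - 1):
        k = k_open if x[i] > 0 else k_closed
        a = (f0 * np.sin(omega * t[i]) - c * v[i] - k * x[i]) / m
        v[i + 1] = v[i] + a * dt          # semi-implicit Euler step
        x[i + 1] = x[i] + v[i + 1] * dt
    return t, x

# Forcing at omega; the piecewise linear stiffness generates response
# content at integer multiples 2*omega, 3*omega, ... as well.
t, x = bilinear_sdof(m=1.0, c=0.05, k_open=1.0, k_closed=2.0,
                     f0=0.1, omega=0.5, t_end=200.0)
spectrum = np.abs(np.fft.rfft(x - x.mean()))  # inspect for higher harmonics
```

A frequency-domain plot of `spectrum` makes the even harmonics visible, which is the signature of bilinear (breathing-crack) behavior the abstract refers to.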
NASA Astrophysics Data System (ADS)
Sahin, Mehmet; Owens, Robert G.
2003-05-01
A novel finite volume method, described in Part I of this paper (Sahin and Owens, Int. J. Numer. Meth. Fluids 2003; 42:57-77), is applied in the linear stability analysis of a lid-driven cavity flow in a square enclosure. A combination of Arnoldi's method and extrapolation to zero mesh size allows us to determine the first critical Reynolds number at which Hopf bifurcation takes place. The extreme sensitivity of the predicted critical Reynolds number to the accuracy of the method and to the treatment of the singularity points is noted. Results are compared with those in the literature and are in very good agreement.
A Structural Connection between Linear and 0-1 Integer Linear Formulations
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2007-01-01
The connection between linear and 0-1 integer linear formulations has attracted the attention of many researchers. The main reason triggering this interest has been an availability of efficient computer programs for solving pure linear problems including the transportation problem. Also the optimality of linear problems is easily verifiable…
Segregation by Complementarity of nanoDNA based on Liquid Crystal Ordering and Centrifugation
NASA Astrophysics Data System (ADS)
Smith, Gregory; Tsai, Ethan; Robins, T.; Khodaghulyan, Armond; Zanchetta, Giuliano; Fraccia, Tommaso; Bellini, Tommaso; Walba, David; Clark, Noel
2012-02-01
Nanometer-length DNA segments (<20 base pairs long) that are complementary can duplex and condense to form liquid crystal phases at concentrations above ~500 mg/mL. This nanoDNA duplexing, combined with order-disorder phase separation, offers a means of sequestering molecules in mixtures of different DNA sequences based on their degree of complementarity. Here we show that isotropic and liquid crystalline phases, comprising respectively single strands and duplexes in multi-component nanoDNA solutions, can be physically separated by liquid crystal condensation followed by centrifugation.
On mean value iterations with application to variational inequality problems
Yao, Jen-Chih.
1989-12-01
In this report, we show that in a Hilbert space, a mean value iterative process generated by a continuous quasi-nonexpansive mapping always converges to a fixed point of the mapping without any precondition. We then employ this result to obtain approximating solutions to the variational inequality and the generalized complementarity problems. 7 refs.
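The mean value iterative process referred to above can be illustrated with a minimal sketch (a one-dimensional toy, not the report's Hilbert-space setting): averaging the current iterate with its image under the mapping, x_{n+1} = (x_n + T(x_n))/2.

```python
import math

def mean_value_iteration(T, x0, iters=200):
    """Krasnoselskii-type mean value iteration x_{n+1} = (x_n + T(x_n)) / 2.
    For a nonexpansive mapping T, this averaged scheme converges to a
    fixed point of T (illustrated here on the real line)."""
    x = x0
    for _ in range(iters):
        x = 0.5 * (x + T(x))
    return x

# T(x) = cos(x) is nonexpansive on R (|T'(x)| = |sin(x)| <= 1); its unique
# fixed point is the Dottie number, approximately 0.739085.
root = mean_value_iteration(math.cos, x0=1.0)
```

The same averaging idea underlies the report's application to variational inequality and generalized complementarity problems, where T is built from a projection onto the feasible set.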
1984-04-01
[OCR-garbled abstract; the legible fragments refer to solving quasi-discretized two- and three-dimensional problems, the expense of evaluating the Jacobian for large problems on each iteration, and a preconditioned conjugate gradient procedure.]
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
NASA Astrophysics Data System (ADS)
Ramirez Camargo, L.; Zink, R.; Dorner, W.
2015-07-01
Spatial assessments of the potential of renewable energy sources (RES) have become a valuable information basis for policy and decision-making. These studies, however, do not explicitly consider the temporal variability of RES such as solar energy or wind. Until now, the focus has usually been on economic profitability based on yearly balances, which do not allow a comprehensive examination of the complementarity of RES technologies. Increasing the temporal resolution of energy output estimation makes it possible to plan the aggregation of a diverse pool of RES plants, i.e., to conceive of the system as a virtual power plant (VPP). This paper presents a spatiotemporal analysis methodology to estimate the RES potential of municipalities. The methodology relies on a combination of open source geographic information systems (GIS) processing tools and the in-memory array processing environment of Python and NumPy. Beyond the typical identification of suitable locations to build power plants, it is possible to define which of them are best for a balanced local energy supply. A case study of a municipality, using spatial data with one square meter resolution and one hour temporal resolution, shows strong complementarity of photovoltaic and wind power. Furthermore, it is shown that a detailed deployment strategy for potential suitable locations for RES, calculated with modest computational requirements, can support municipalities in developing VPPs and improving security of supply.
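Since the abstract names Python and NumPy as its processing environment, the temporal-complementarity idea can be sketched on synthetic hourly series (the data below are fabricated for illustration, not the study's measurements): photovoltaic output peaking at midday against a wind profile modeled as stronger at night.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)  # one year of hourly steps

# Synthetic hourly capacity factors (illustrative assumptions only):
# PV follows a clipped daytime sine; wind is modeled as stronger at night.
pv = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)
wind = 0.5 + 0.3 * np.cos((hours % 24) / 24 * 2 * np.pi) \
       + 0.1 * rng.standard_normal(hours.size)
wind = np.clip(wind, 0, 1)

# A simple complementarity indicator: the Pearson correlation of the two
# series. Values toward -1 mean the sources peak at opposite times.
r = np.corrcoef(pv, wind)[0, 1]

# Aggregating anticorrelated sources smooths the combined output:
# the coefficient of variation of the mix is lower than PV's alone.
combined_cv = (pv + wind).std() / (pv + wind).mean()
```

With anti-phased profiles like these, `r` comes out negative and the combined series has a much lower coefficient of variation than photovoltaics alone, which is the kind of balanced-supply argument a VPP planner would make.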
Complementarity of PALM and SOFI for super-resolution live-cell imaging of focal adhesions
Deschout, Hendrik; Lukes, Tomas; Sharipov, Azat; Szlag, Daniel; Feletti, Lely; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Leutenegger, Marcel; Lasser, Theo; Radenovic, Aleksandra
2016-01-01
Live-cell imaging of focal adhesions requires a sufficiently high temporal resolution, which remains a challenge for super-resolution microscopy. Here we address this important issue by combining photoactivated localization microscopy (PALM) with super-resolution optical fluctuation imaging (SOFI). Using simulations and fixed-cell focal adhesion images, we investigate the complementarity between PALM and SOFI in terms of spatial and temporal resolution. This PALM-SOFI framework is used to image focal adhesions in living cells, while obtaining a temporal resolution below 10 s. We visualize the dynamics of focal adhesions, and reveal local mean velocities around 190 nm min⁻¹. The complementarity of PALM and SOFI is assessed in detail with a methodology that integrates a resolution and signal-to-noise metric. This PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of focal adhesions through the estimation of molecular parameters such as fluorophore densities and photoactivation or photoswitching kinetics. PMID:27991512
Varghese, Sunil; Scott, Richard E
2004-01-01
Developing countries are exploring the role of telehealth to overcome the challenges of providing adequate health care services. However, this process faces disparities and a lack of complementarity in telehealth policy development. Telehealth has the potential to transcend geopolitical boundaries, yet telehealth policy developed in one jurisdiction may hamper applications in another. Understanding such policy complexities is essential for telehealth to realize its full global potential. This study investigated 12 East Asian countries that may represent a microcosm of the world, to determine if the telehealth policy response of countries could be categorized, and whether any implications could be identified for the development of complementary telehealth policy. The countries were Cambodia, China, Hong Kong, Indonesia, Japan, Malaysia, Myanmar, Singapore, South Korea, Taiwan, Thailand, and Vietnam. Three categories of country response were identified in regard to national policy support and development. The first category was "None" (Cambodia, Myanmar, and Vietnam) where international partners, driven by humanitarian concerns, lead telehealth activity. The second category was "Proactive" (China, Indonesia, Malaysia, Singapore, South Korea, Taiwan, and Thailand) where national policies were designed with the view that telehealth initiatives are a component of larger development objectives. The third was "Reactive" (Hong Kong and Japan), where policies were only proffered after telehealth activities were sustainable. It is concluded that although complementarity of telehealth policy development is not occurring, increased interjurisdictional telehealth activity, regional clusters, and concerted and coordinated effort amongst researchers, practitioners, and policy makers may alter this trend.
Emergence of complementarity and the Baconian roots of Niels Bohr's method
NASA Astrophysics Data System (ADS)
Perovic, Slobodan
2013-08-01
I argue that instead of a rather narrow focus on N. Bohr's account of complementarity as a particular and perhaps obscure metaphysical or epistemological concept (or as being motivated by such a concept), we should consider it to result from pursuing a particular method of studying physical phenomena. More precisely, I identify a strong undercurrent of Baconian method of induction in Bohr's work that likely emerged during his experimental training and practice. When its development is analyzed in light of Baconian induction, complementarity emerges as a levelheaded rather than a controversial account, carefully elicited from a comprehensive grasp of the available experimental basis, shunning hasty metaphysically motivated generalizations based on partial experimental evidence. In fact, Bohr's insistence on the "classical" nature of observations in experiments, as well as the counterintuitive synthesis of wave and particle concepts that have puzzled scholars, seem a natural outcome (an updated instance) of the inductive method. Such analysis clarifies the intricacies of Schrödinger's early critique of the account as well as Bohr's response, which have been misinterpreted in the literature. If adequate, the analysis may lend considerable support to the view that Bacon explicated the general terms of an experimentally minded strand of the scientific method, developed and refined by scientists in the following three centuries.
Open-quantum-systems approach to complementarity in neutral-kaon interferometry
NASA Astrophysics Data System (ADS)
de Souza, Gustavo; de Oliveira, J. G. G.; Varizi, Adalberto D.; Nogueira, Edson C.; Sampaio, Marcos D.
2016-12-01
In bipartite quantum systems, entanglement correlations between the parties exert a direct influence on the phenomenon of wave-particle duality. This effect has been quantitatively analyzed in the context of two qubits by Jakob and Bergou [Opt. Commun. 283, 827 (2010), 10.1016/j.optcom.2009.10.044]. Employing a description of the K-meson propagation in free space where its weak decay states are included as a second party, we study this effect here in kaon-antikaon oscillations. We show that a new quantitative "triality" relation holds, similar to the one considered by Jakob and Bergou. In our case, it relates the distinguishability between the decay-product states corresponding to the distinct kaon propagation modes KS, KL, the amount of wave-like path interference between these states, and the amount of entanglement given by the reduced von Neumann entropy. The inequality can account for the complementarity between strangeness oscillations and lifetime information previously considered in the literature, therefore allowing one to see how it is affected by entanglement correlations. As we will discuss, it allows one to visualize clearly through the K0-K ¯0 oscillations the fundamental role of entanglement in quantum complementarity.
Feeding complementarity versus redundancy among herbivorous fishes on a Caribbean reef
NASA Astrophysics Data System (ADS)
Burkepile, D. E.; Hay, M. E.
2011-06-01
Herbivory is an important driver of community structure on coral reefs. Adequate understanding of herbivory will mandate better knowledge of how specific herbivores impact reef communities and the redundancy versus complementarity of their ecological roles. We used algal communities generated by herbivore manipulations to assess such roles among Caribbean herbivorous fishes. We created large enclosures on a 16- to 18-m-deep reef to create treatments grazed for 10 months by: (1) only Sparisoma aurofrenatum, (2) only Acanthurus bahianus, (3) no large herbivorous fishes, or (4) natural densities of all reef fishes. After 10 months, we removed cages and filmed how free-ranging reef fishes fed among these treatments that differed in algal community structure. In general, Acanthurus spp. and Scarus spp. rapidly grazed exclosure and Sparisoma-only treatments, while Sparisoma spp. preferentially grazed exclosure and Acanthurus-only treatments. These patterns suggest complementarity between Sparisoma spp. and both Acanthurus spp. and Scarus spp. but redundancy between Acanthurus spp. and Scarus spp. Despite these generalities, there was also within-genera variance in response to the different treatments. For example, large Scarus spp., such as Scarus guacamaia, fed more similarly to Sparisoma spp., particularly Sparisoma viride, than to other Scarus spp. Moreover, the three common Sparisoma species differed considerably in the macroalgae to which they exhibited positive or negative relationships. Thus, herbivorous reef fishes vary considerably in their response to different algal communities and exhibit complex patterns of compensatory feeding and functional redundancy that are poorly predicted by taxonomy alone.
What is complementarity?: Niels Bohr and the architecture of quantum theory
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2014-12-01
This article explores Bohr’s argument, advanced under the heading of ‘complementarity,’ concerning quantum phenomena and quantum mechanics, and its physical and philosophical implications. In Bohr, the term complementarity designates both a particular concept and an overall interpretation of quantum phenomena and quantum mechanics, in part grounded in this concept. While the argument of this article is primarily philosophical, it will also address, historically, the development and transformations of Bohr’s thinking, under the impact of the development of quantum theory and Bohr’s confrontation with Einstein, especially their exchange concerning the EPR experiment, proposed by Einstein, Podolsky and Rosen in 1935. Bohr’s interpretation was progressively characterized by a more radical epistemology, in its ultimate form, which was developed in the 1930s and with which I shall be especially concerned here, defined by his new concepts of phenomenon and atomicity. According to this epistemology, quantum objects are seen as indescribable and possibly even as inconceivable, and as manifesting their existence only in the effects of their interactions with measuring instruments upon those instruments, effects that define phenomena in Bohr’s sense. The absence of causality is an automatic consequence of this epistemology. I shall also consider how probability and statistics work under these epistemological conditions.
Wave-particle dualism and complementarity unraveled by a different mode
Menzel, Ralf; Puhlmann, Dirk; Heuer, Axel; Schleich, Wolfgang P.
2012-01-01
The precise knowledge of one of two complementary experimental outcomes prevents us from obtaining complete information about the other one. This formulation of Niels Bohr’s principle of complementarity when applied to the paradigm of wave-particle dualism—that is, to Young’s double-slit experiment—implies that the information about the slit through which a quantum particle has passed erases interference. In the present paper we report a double-slit experiment using two photons created by spontaneous parametric down-conversion where we observe interference in the signal photon despite the fact that we have located it in one of the slits due to its entanglement with the idler photon. This surprising aspect of complementarity comes to light by our special choice of the TEM01 pump mode. According to quantum field theory the signal photon is then in a coherent superposition of two distinct wave vectors giving rise to interference fringes analogous to two mechanical slits. PMID:22628561
Singh, Vaibhav; Stoop, Marcel P; Stingl, Christoph; Luitwieler, Ronald L; Dekker, Lennard J; van Duijn, Martijn M; Kreft, Karim L; Luider, Theo M; Hintzen, Rogier Q
2013-12-01
B lymphocytes play a pivotal role in multiple sclerosis pathology, possibly via both antibody-dependent and -independent pathways. Intrathecal immunoglobulin G in multiple sclerosis is produced by clonally expanded B-cell populations. Recent studies indicate that the complementarity determining regions of immunoglobulins specific for certain antigens are frequently shared between different individuals. In this study, our main objective was to identify specific proteomic profiles of mutated complementarity determining regions of immunoglobulin G present in multiple sclerosis patients but absent in healthy controls. To achieve this objective, we purified immunoglobulin G from the cerebrospinal fluid of 29 multiple sclerosis patients and 30 healthy controls and separated the corresponding heavy and light chains via SDS-PAGE. Subsequently, bands were excised, trypsinized, and measured with high-resolution mass spectrometry. We sequenced 841 heavy and 771 light chain variable region peptides. We observed 24 heavy and 26 light chain complementarity determining regions that were solely present in a number of multiple sclerosis patients. Using stringent criteria for the identification of common peptides, we found five complementarity determining regions shared in three or more patients and not in controls. Interestingly, one complementarity determining region with a single mutation was found in six patients. Additionally, one other patient carrying a similar complementarity determining region with another mutation was observed. In addition, we found a skew in the κ-to-λ ratio and in the usage of certain variable heavy regions that was previously observed at the transcriptome level. At the protein level, cerebrospinal fluid immunoglobulin G shares common characteristics in the antigen binding region among different multiple sclerosis patients. This shared fingerprint may point to common antigens for B-cell activation.
Norris, Vic; Root-Bernstein, Robert
2009-01-01
In the “ecosystems-first” approach to the origins of life, networks of non-covalent assemblies of molecules (composomes), rather than individual protocells, evolved under the constraints of molecular complementarity. Composomes evolved into the hyperstructures of modern bacteria. We extend the ecosystems-first approach to explain the origin of eukaryotic cells through the integration of mixed populations of bacteria. We suggest that mutualism and symbiosis resulted in cellular mergers entailing the loss of redundant hyperstructures, the uncoupling of transcription and translation, and the emergence of introns and multiple chromosomes. Molecular complementarity also facilitated integration of bacterial hyperstructures to perform cytoskeletal and movement functions. PMID:19582221
Sparse linear programming subprogram
Hanson, R.J.; Hiebert, K.L.
1981-12-01
This report describes a subprogram, SPLP(), for solving linear programming problems. The package of subprogram units comprising SPLP() is written in Fortran 77. The subprogram SPLP() is intended for problems involving at most a few thousand constraints and variables. The subprograms are written to take advantage of sparsity in the constraint matrix. A very general problem statement is accepted by SPLP(). It allows upper, lower, or no bounds on the variables. Both the primal and dual solutions are returned as output parameters. The package has many optional features. Among them is the ability to save partial results and then use them to continue the computation at a later time.
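SPLP() itself is a Fortran 77 package, but the shape of problem it accepts (a sparse constraint matrix, bounds on the variables, and both primal and dual solutions returned) can be sketched with a modern stand-in. The example below uses SciPy's HiGHS-based `linprog` as that stand-in; it is not the SPLP() interface, just an illustration of the same problem statement.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix

# Minimize c @ x subject to A_ub @ x <= b_ub and simple variable bounds,
# with a sparse constraint matrix -- the kind of problem SPLP() targets.
c = np.array([-1.0, -2.0])                  # i.e., maximize x + 2y
A_ub = csr_matrix(np.array([[1.0, 1.0]]))   # x + y <= 4
b_ub = np.array([4.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 3), (0, 3)], method="highs")

# The primal solution is in res.x; the dual values (constraint marginals),
# analogous to SPLP()'s dual output, are in res.ineqlin.marginals.
```

For this toy problem the optimum pushes y to its upper bound of 3 and uses the remaining slack for x, giving x = (1, 3) with objective value -7.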
ERIC Educational Resources Information Center
Svartberg, Martin; Stiles, Tore C.
1992-01-01
Examined therapist competence and patient-therapist complementarity as to their interrelation and their unique, collective, and interactive contributions to patient change in 20 sessions of short-term anxiety-provoking psychotherapy. Found that competence in early sessions did not relate to patient change. Patient-therapist complementarity ratings…
Niklaus, Pascal A; Baruffol, Martin; He, Jin-Sheng; Ma, Keping; Schmid, Bernhard
2017-04-01
Most experimental biodiversity-ecosystem functioning research to date has addressed herbaceous plant communities. Comparably little is known about how forest communities will respond to species losses, despite their importance for global biogeochemical cycling. We studied tree species interactions in experimental subtropical tree communities with 33 distinct tree species mixtures and one, two, or four species. Plots were either exposed to natural light levels or shaded. Trees grew rapidly and were intensely competing above ground after 1.5 growing seasons when plots were thinned and the vertical distribution of leaves and wood determined by separating the biomass of harvested trees into 50 cm height increments. Our aim was to analyze effects of species richness in relation to the vertical allocation of leaf biomass and wood, with an emphasis on bipartite competitive interactions among species. Aboveground productivity increased with species richness. The community-level vertical leaf and wood distribution depended on the species composition of communities. Mean height and breadth of species-level vertical leaf and wood distributions did not change with species richness. However, the extra biomass produced by mixtures compared to monocultures of the component species increased when vertical leaf distributions of monocultures were more different. Decomposition of biodiversity effects with the additive partitioning scheme indicated positive complementarity effects that were higher in light than in shade. Selection effects did not deviate from zero, irrespective of light levels. Vertical leaf distributions shifted apart in mixed stands as a consequence of competition-driven phenotypic plasticity, promoting realized complementarity. Structural equation models showed that this effect was larger for species that differed more in growth strategies that were characterized by functional traits. In 13 of the 18 investigated two-species mixtures, both species benefitted
An Assessment of Linear Versus Non-linear Multigrid Methods for Unstructured Mesh Solvers
2001-05-01
problems is investigated. The first case consists of a transient radiation-diffusion problem for which an exact linearization is available, while the...to the Jacobian of a second-order accurate discretization. When an exact linearization is employed, the linear and non-linear multigrid methods
Superconducting linear actuator
NASA Technical Reports Server (NTRS)
Johnson, Bruce; Hockney, Richard
1993-01-01
Special actuators are needed to control the orientation of large structures in space-based precision pointing systems. Electromagnetic actuators that presently exist are too large in size and their bandwidth is too low. Hydraulic fluid actuation also presents problems for many space-based applications. Hydraulic oil can escape in space and contaminate the environment around the spacecraft. A research study was performed that selected an electrically-powered linear actuator that can be used to control the orientation of a large pointed structure. This research surveyed available products, analyzed the capabilities of conventional linear actuators, and designed a first-cut candidate superconducting linear actuator. The study first examined theoretical capabilities of electrical actuators and determined their problems with respect to the application and then determined if any presently available actuators or any modifications to available actuator designs would meet the required performance. The best actuator was then selected based on available design, modified design, or new design for this application. The last task was to proceed with a conceptual design. No commercially-available linear actuator or modification capable of meeting the specifications was found. A conventional moving-coil dc linear actuator would meet the specification, but the back-iron for this actuator would weigh approximately 12,000 lbs. A superconducting field coil, however, eliminates the need for back iron, resulting in an actuator weight of approximately 1000 lbs.
Dashivets, Tetyana; Stracke, Jan; Dengl, Stefan; Knaupp, Alexander; Pollmann, Jan; Buchner, Johannes; Schlothauer, Tilman
2016-01-01
Therapeutic antibodies can undergo a variety of chemical modification reactions in vitro. Depending on the site of modification, either antigen binding or Fc-mediated functions can be affected. Oxidation of tryptophan residues is one of the post-translational modifications leading to altered antibody functionality. In this study, we examined the structural and functional properties of a therapeutic antibody construct and 2 affinity matured variants thereof. Two of the 3 antibodies carry an oxidation-prone tryptophan residue in the complementarity-determining region of the VL domain. We demonstrate the differences in the stability and bioactivity of the 3 antibodies, and reveal differential degradation pathways for the antibodies susceptible to oxidation. PMID:27612038
Complementarity of weak lensing and peculiar velocity measurements in testing general relativity
Song, Yong-Seon; Zhao Gongbo; Bacon, David; Koyama, Kazuya; Nichol, Robert C.; Pogosian, Levon
2011-10-15
We explore the complementarity of weak lensing and galaxy peculiar velocity measurements to better constrain modifications to General Relativity. We find no evidence for deviations from General Relativity on cosmological scales from a combination of peculiar velocity measurements (for Luminous Red Galaxies in the Sloan Digital Sky Survey) with weak lensing measurements (from the Canada-France-Hawaii Telescope Legacy Survey). We provide a Fisher error forecast for a Euclid-like space-based survey including both lensing and peculiar velocity measurements and show that the expected constraints on modified gravity will be at least an order of magnitude better than with present data, i.e. we will obtain ≈5% errors on the modified gravity parametrization described here. We also present a model-independent method for constraining modified gravity parameters using tomographic peculiar velocity information, and apply this methodology to the present data set.
Technology Transfer Automated Retrieval System (TEKTRAN)
Complementary resource use and redundancy of species that fulfil the same ecological role are two mechanisms that can increase and stabilize process rates in ecosystems. For example, predator complementarity and redundancy can determine prey consumption rates, in some cases providing invaluable cont...
ERIC Educational Resources Information Center
Scupola, Ada
1999-01-01
Discussion of the publishing industry and its use of information and communication technologies focuses on the way in which electronic-commerce technologies are changing and could change the publishing processes, and develops a business complementarity model of electronic publishing to maximize profitability and improve the competitive position.…
Meinnel, T; Sacerdot, C; Graffe, M; Blanquet, S; Springer, M
1999-07-23
Translation initiation factor IF3, one of three factors specifically required for translation initiation in Escherichia coli, inhibits initiation on any codon other than the three canonical initiation codons, AUG, GUG, or UUG. This discrimination against initiation on non-canonical codons could be due either to direct recognition of the two last bases of the codon and their cognate bases on the anticodon or to some ability to "feel" codon-anticodon complementarity. To investigate the importance of codon-anticodon complementarity in the discriminatory role of IF3, we constructed a derivative of tRNA(Leu) that has all the known characteristics of an initiator tRNA except the CAU anticodon. This tRNA is efficiently formylated by methionyl-tRNA(fMet) transformylase and charged by leucyl-tRNA synthetase irrespective of the sequence of its anticodon. These initiator tRNA(Leu) derivatives (called tRNA(LI)) allow initiation at all the non-canonical codons tested, provided that the complementarity between the codon and the anticodon of the initiator tRNA(Leu) is respected. More remarkably, the discrimination by IF3, normally observed with non-canonical codons, is neutralised if a tRNA(LI) carrying a complementary anticodon is used for initiation. This suggests that IF3 somehow recognises codon-anticodon complementarity, at least at the second and third position of the codon, rather than some specific bases in either the codon or the anticodon.
ERIC Educational Resources Information Center
Stroup, Walter M.; Wilensky, Uri
2014-01-01
Placed in the larger context of broadening the engagement with systems dynamics and complexity theory in school-aged learning and teaching, this paper is intended to introduce, situate, and illustrate--with results from the use of network supported participatory simulations in classrooms--a stance we call "embedded complementarity" as an…
Kelly, Emily L A; Eynaud, Yoan; Clements, Samantha M; Gleason, Molly; Sparks, Russell T; Williams, Ivor D; Smith, Jennifer E
2016-12-01
Patterns of species resource use provide insight into the functional roles of species and thus their ecological significance within a community. The functional role of herbivorous fishes on coral reefs has been defined through a variety of methods, but from a grazing perspective, less is known about the species-specific preferences of herbivores on different groups of reef algae and the extent of dietary overlap across an herbivore community. Here, we quantified patterns of redundancy and complementarity in a highly diverse community of herbivores at a reef on Maui, Hawaii, USA. First, we tracked fish foraging behavior in situ to record bite rate and type of substrate bitten. Second, we examined gut contents of select herbivorous fishes to determine consumption at a finer scale. Finally, we placed foraging behavior in the context of resource availability to determine how fish selected substrate type. All species predominantly (73-100 %) foraged on turf algae, though there were differences among the types of macroalgae and other substrates bitten. Increased resolution via gut content analysis showed the composition of turf algae consumed by fishes differed across herbivore species. Consideration of foraging behavior by substrate availability revealed 50 % of herbivores selected for turf as opposed to other substrate types, but overall, there were variable foraging portfolios across all species. Through these three methods of investigation, we found higher complementarity among herbivorous fishes than would be revealed using a single metric. These results suggest differences across species in the herbivore "rain of bites" that graze and shape benthic community composition.
Linear Algebraic Method for Non-Linear Map Analysis
Yu, L.; Nash, B.
2009-05-04
We present a newly developed method to analyze some non-linear dynamics problems such as the Henon map using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition which is widely used in conventional linear algebra.
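The linear-algebraic approach sketched in this abstract can be illustrated numerically. The snippet below is a minimal Python illustration, not the authors' code; the tune value and the form of the quadratic kick are assumed for the example. It builds the accelerator-physics form of the Henon map (a linear rotation plus a nonlinear kick) and recovers the tune from the phase of the eigenvalues of the linear one-turn matrix:

```python
import numpy as np

def henon_step(x, p, mu):
    """One turn of the accelerator-physics Henon map:
    a quadratic kick followed by a linear rotation by phase mu."""
    xk, pk = x, p + x * x          # sextupole-like kick (illustrative form)
    c, s = np.cos(mu), np.sin(mu)
    return c * xk + s * pk, -s * xk + c * pk

# Linear one-turn matrix: a pure rotation by mu
tune = 0.205                       # assumed fractional tune
mu = 2 * np.pi * tune
M = np.array([[np.cos(mu), np.sin(mu)],
              [-np.sin(mu), np.cos(mu)]])

# The eigenvalues of M are exp(+/- i*mu); the tune is recovered
# from the complex phase of either eigenvalue.
lam = np.linalg.eigvals(M)
recovered = abs(np.angle(lam[0])) / (2 * np.pi)
print(recovered)

# Iterating the full nonlinear map gives the amplitude-dependent
# motion whose spectral structure the abstract discusses.
x, p = 0.1, 0.0
orbit = [(x, p)]
for _ in range(1000):
    x, p = henon_step(x, p, mu)
    orbit.append((x, p))
```

For the nonlinear orbit, the tune acquires an amplitude dependence, which is what the Jordan-decomposition analysis in the paper is designed to capture.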
Linear Programming Problems for Generalized Uncertainty
ERIC Educational Resources Information Center
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…
Linear Programming across the Curriculum
ERIC Educational Resources Information Center
Yoder, S. Elizabeth; Kurz, M. Elizabeth
2015-01-01
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
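ALPS itself is a menu-driven DOS-era package. As a present-day sketch of the same workflow (problem data invented for the example, and assuming SciPy is installed), a small LP of the kind ALPS handles can be solved with `scipy.optimize.linprog`:

```python
from scipy.optimize import linprog

# Illustrative LP (data invented for the example):
#   maximize  3x + 2y
#   subject to  x +  y <= 4
#               x + 3y <= 6
#               x, y   >= 0
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimum at x=4, y=0, objective value 12
```

The optimum sits at a vertex of the feasible polytope, which is exactly the property the revised simplex method exploits.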
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles in the field. The result of the foregoing is a beam which spirals about the axis of the acceleration path. The combination of the electric fields and angular motion of the particles cooperate to provide a stable and focused particle beam.
NASA Technical Reports Server (NTRS)
2006-01-01
Context image for PIA03667: Linear Clouds
These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image.
Image information: VIS instrument. Latitude -80.1N, Longitude 52.1E. 17 meter/pixel resolution.
Note: this THEMIS visual image has been neither radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.
NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Liolios, K.; Georgiev, I.; Liolios, A.
2012-10-01
A numerical approach for a problem arising in Civil and Environmental Engineering is presented. This problem concerns the dynamic soil-pipeline interaction, when unilateral contact conditions due to tensionless and elastoplastic softening/fracturing behaviour of the soil as well as due to gapping caused by earthquake excitations are taken into account. Moreover, soil-capacity degradation due to environmental effects is taken into account. The mathematical formulation of this dynamic elastoplasticity problem leads to a system of partial differential equations with equality domain and inequality boundary conditions. The proposed numerical approach is based on a double discretization, in space and time, and on mathematical programming methods. First, in space the finite element method (FEM) is used for the simulation of the pipeline and the unilateral contact interface, in combination with the boundary element method (BEM) for the soil simulation. Concepts of the non-convex analysis are used. Next, with the aid of Laplace transform, the equality problem conditions are transformed to convolutional ones involving as unknowns the unilateral quantities only. So the number of unknowns is significantly reduced. Then a marching-time approach is applied and a non-convex linear complementarity problem is solved in each time-step.
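The marching-time scheme above solves a linear complementarity problem at each time step. As a hedged illustration (a standard convex LCP with invented data, not the paper's non-convex variant), a projected Gauss-Seidel sweep is a common textbook scheme for small LCPs of the form w = Mz + q, z ≥ 0, w ≥ 0, zᵀw = 0:

```python
import numpy as np

def lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP:
    find z >= 0 with w = M z + q >= 0 and z.w = 0.
    A standard textbook scheme, not the paper's algorithm."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # residual excluding the diagonal contribution of z[i]
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

# Invented example: M is positive definite, so the solution is unique.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-4.0, 1.0])
z = lcp_pgs(M, q)
w = M @ z + q
print(z, w)   # z >= 0, w >= 0, and z.w ~ 0 (complementarity)
```

Here the solver finds z = (2, 0) with w = (0, 3): each index has either z_i = 0 or w_i = 0, which is the complementarity condition.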
NASA Astrophysics Data System (ADS)
Liang, Yeong-Cherng; Spekkens, Robert W.; Wiseman, Howard M.
2011-09-01
In 1960, the mathematician Ernst Specker described a simple example of nonclassical correlations, the counter-intuitive features of which he dramatized using a parable about a seer, who sets an impossible prediction task to his daughter’s suitors. We revisit this example here, using it as an entrée to three central concepts in quantum foundations: contextuality, Bell-nonlocality, and complementarity. Specifically, we show that Specker’s parable offers a narrative thread that weaves together a large number of results, including the following: the impossibility of measurement-noncontextual and outcome-deterministic ontological models of quantum theory (the 1967 Kochen-Specker theorem), in particular, the recent state-specific pentagram proof of Klyachko; the impossibility of Bell-local models of quantum theory (Bell’s theorem), especially the proofs by Mermin and Hardy and extensions thereof; the impossibility of a preparation-noncontextual ontological model of quantum theory; the existence of triples of positive operator valued measures (POVMs) that can be measured jointly pairwise but not triplewise. Along the way, several novel results are presented: a generalization of a theorem by Fine connecting the existence of a joint distribution over outcomes of counterfactual measurements to the existence of a measurement-noncontextual and outcome-deterministic ontological model; a generalization of Klyachko’s proof of the Kochen-Specker theorem from pentagrams to a family of star polygons; a proof of the Kochen-Specker theorem in the style of Hardy’s proof of Bell’s theorem (i.e., one that makes use of the failure of the transitivity of implication for counterfactual statements); a categorization of contextual and Bell-nonlocal correlations in terms of frustrated networks; a derivation of a new inequality testing preparation noncontextuality; some novel results on the joint measurability of POVMs and the question of whether these can be modeled
Katayama, C D; Eidelman, F J; Duncan, A; Hooshmand, F; Hedrick, S M
1995-01-01
The antigen receptor on T cells (TCR) has been predicted to have a structure similar to a membrane-anchored form of an immunoglobulin F(ab) fragment. Virtually all of the conserved amino acids that are important for inter- and intramolecular interactions in the VH-VL pair are also conserved in the TCR V alpha and V beta chains. A molecular model of the TCR has been constructed by homology and we have used the information from this, as well as the earlier structural predictions of others, to study the basis for specificity. Specifically, regions of a TCR cloned from an antigen-specific T cell were stitched into the corresponding framework of a second TCR. Results indicate that the substitution of amino acid sequences corresponding to the complementarity determining regions (CDRs) of immunoglobulin can convey the specificity for antigen and major histocompatibility complex molecules. These data are consistent with a role, but not an exclusive role, for CDR3 in antigen peptide recognition. PMID:7534228
Low and high energy phenomenology of quark-lepton complementarity scenarios
Hochmuth, Kathrin A.; Rodejohann, Werner
2007-04-01
We conduct a detailed analysis of the phenomenology of two predictive seesaw scenarios leading to quark-lepton complementarity. In both cases we discuss the neutrino mixing observables and their correlations, neutrinoless double beta decay and lepton flavor violating decays such as μ → eγ. We also comment on leptogenesis. The first scenario is disfavored on the level of one to two standard deviations, in particular, due to its prediction for |U_e3|. There can be resonant leptogenesis with quasidegenerate heavy and light neutrinos, which would imply sizable cancellations in neutrinoless double beta decay. The decays μ → eγ and τ → μγ are typically observable unless the SUSY masses approach the TeV scale. In the second scenario leptogenesis is impossible. It is, however, in perfect agreement with all oscillation data. The prediction for μ → eγ is in general too large, unless the SUSY masses are in the range of several TeV. In this case τ → eγ and τ → μγ are unobservable.
NASA Astrophysics Data System (ADS)
Leliaert, J.; Eberbeck, D.; Liebl, M.; Coene, A.; Steinhoff, U.; Wiekhorst, F.; Van Waeyenberge, B.; Dupré, L.
2017-03-01
Magnetorelaxometry and thermal magnetic noise spectroscopy are two magnetic characterization techniques enabling one to estimate the magnetic nanoparticle hydrodynamic size distribution. Both techniques are based on the same physical principle, i.e. the thermal fluctuations of the magnetic moment. In the case of magnetorelaxometry these fluctuations give rise to a relaxing magnetic moment after an externally applied magnetic field is switched off, whereas thermal magnetic noise spectra are measured in the absence of any external excitation. Hence, thermal magnetic noise spectroscopy is an equilibrium measurement technique. Here, we compare the similarity and complementarity of both methods and conclude that, for particles within both methods’ sensitivity range, they give the same estimate for the size distribution. For small particles (or samples with low viscosities), the used setup is not sufficiently sensitive to accurately estimate the size distribution from the relaxometry signal whereas this is still possible with thermal magnetic noise spectroscopy. For larger particles, however, magnetorelaxometry is the preferred method because of its higher signal to noise ratio and faster measurement time.
Bianga, Juliusz; Bouslimani, Amina; Bec, Nicole; Quenet, François; Mounicou, Sandra; Szpunar, Joanna; Bouyssiere, Brice; Lobinski, Ryszard; Larroque, Christian
2014-08-01
The follow-up of the Heated Intraoperative Chemotherapy (HIPEC) of peritoneal carcinomatosis would benefit from the monitoring of the penetration, distribution and metabolism of the drug within the tumor. As tumor nodules can be resected during the therapy, mass spectrometry imaging is a suitable tool for the evaluation of treatment efficacy, and, as a result, the therapy can be re-optimized. In this work we demonstrate the complementarity of laser ablation (LA) ICP mass spectrometry and MALDI imaging to study the penetration and distribution of two Pt-based metallodrugs (cisplatin and oxaliplatin) in human tumor samples removed from patients diagnosed with colorectal or ovarian peritoneal carcinomatosis. LA ICP MS offered sensitive (LOD for ¹⁹⁵Pt of 4.8 pg s⁻¹) imaging of platinum quasi-independently of the original species and the sample matrix and thus an ultimate way of verifying the penetration of the Pt-containing drug or its moieties into the tumor. MALDI imaging was found to suffer in some cases from signal suppression by the matrix leading to false negatives. In the case of the oxaliplatin metallodrug, the results obtained from ICP and MALDI MS imaging were coherent whereas in the case of cisplatin, species detected by ICP MS imaging could not be validated by MALDI MS. The study is the first application of the dual ICP and MALDI MS imaging to the follow-up of metallodrugs in human tumors.
Ligand Binding Site Detection by Local Structure Alignment and Its Performance Complementarity
Lee, Hui Sun; Im, Wonpil
2013-01-01
Accurate determination of potential ligand binding sites (BS) is a key step for protein function characterization and structure-based drug design. Despite promising results of template-based BS prediction methods using global structure alignment (GSA), there is room to improve the performance by properly incorporating local structure alignment (LSA) because BS are local structures and often similar for proteins with dissimilar global folds. We present a template-based ligand BS prediction method using G-LoSA, our LSA tool. A large benchmark set validation shows that G-LoSA predicts drug-like ligands’ positions in single-chain protein targets more precisely than TM-align, a GSA-based method, while the overall success rate of TM-align is better. G-LoSA is particularly efficient for accurate detection of local structures conserved across proteins with diverse global topologies. Recognizing the performance complementarity of G-LoSA to TM-align and a non-template geometry-based method, fpocket, a robust consensus scoring method, CMCS-BSP (Complementary Methods and Consensus Scoring for ligand Binding Site Prediction), is developed and shows improvement on prediction accuracy. The G-LoSA source code is freely available at http://im.bioinformatics.ku.edu/GLoSA. PMID:23957286
Zhu, Dan H.; Wang, Ping; Zhang, Wei Z.; Yuan, Yue; Li, Bin; Wang, Jiang
2015-01-01
Background Although plant diversity is postulated to resist invasion, studies have not provided consistent results, most of which were ascribed to the influences of other covariate environmental factors. Methodology/Principal Findings To explore the mechanisms by which plant diversity influences community invasibility, an experiment was conducted involving grassland sites varying in their species richness (one, two, four, eight, and sixteen species). Light interception efficiency and soil resources (total N, total P, and water content) were measured. The number of species, biomass, and the number of seedlings of the invading species decreased significantly with species richness. The presence of Patrinia scabiosaefolia Fisch. ex Trev. and Mosla dianthera (Buch.-Ham. ex Roxburgh) Maxim. significantly increased the resistance of the communities to invasion. A structural equation model showed that the richness of planted species had no direct and significant effect on invasion. Light interception efficiency had a negative effect on the invasion whereas soil water content had a positive effect. In monocultures, Antenoron filiforme (Thunb.) Rob. et Vaut. showed the highest light interception efficiency and P. scabiosaefolia recorded the lowest soil water content. With increased planted-species richness, a greater percentage of pots showed light use efficiency higher than that of A. filiforme and a lower soil water content than that in P. scabiosaefolia. Conclusions/Significance The results of this study suggest that plant diversity confers resistance to invasion, which is mainly ascribed to the sampling effect of particular species and the complementarity effect among species on resources use. PMID:26556713
NASA Astrophysics Data System (ADS)
Pandya, Palash; Misra, Avijit; Chakrabarty, Indranil
2016-11-01
We find a single parameter family of genuinely entangled three-qubit pure states, called the maximally Bell-inequality violating states (MBV), which exhibit maximum Bell-inequality violation by the reduced bipartite system for a fixed amount of genuine tripartite entanglement quantified by the so-called tangle measure. This in turn implies that there holds a complementary relation between the Bell-inequality violation by the reduced bipartite systems and the tangle present in the three-qubit states, not necessarily pure. The MBV states also exhibit maximum Bell-inequality violation by the reduced bipartite systems of the three-qubit pure states with a fixed amount of genuine tripartite correlation quantified by the generalized geometric measure, a genuine entanglement measure of multiparty pure states, and the discord monogamy score, a multipartite quantum correlation measure from information-theoretic paradigm. The aforementioned complementary relation has also been established for three-qubit pure states for the generalized geometric measure and the discord monogamy score, respectively. The complementarity between the Bell-inequality violation by the reduced bipartite systems and the genuine tripartite correlation suggests that the Bell-inequality violation in the reduced two-qubit system comes at the cost of the total tripartite correlation present in the entire system.
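As a minimal numerical companion to the Bell-inequality discussion (an illustration of bipartite Bell violation, not the paper's three-qubit calculation), the snippet below evaluates the CHSH combination for the two-qubit singlet state at the standard measurement angles and recovers Tsirelson's bound 2√2:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable along a direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state |psi-> = (|01> - |10>)/sqrt(2); its correlations obey
# E(a, b) = -cos(a - b) for measurement angles a, b.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <psi| spin(a) (x) spin(b) |psi>."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# Standard angle choices that saturate Tsirelson's bound
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4

S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))   # ~2*sqrt(2), exceeding the classical bound of 2
```

Any |S| above 2 certifies Bell-nonlocal correlations; the complementarity relation in the abstract concerns how much of this violation survives in the reduced bipartite systems of a genuinely tripartite state.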
PHYSICS OF PREDETERMINED EVENTS: Complementarity States of Choice-Chance Mechanics
NASA Astrophysics Data System (ADS)
Morales, Manuel
2011-04-01
We find that the deterministic application of choice-chance mechanics, as applied in the Tempt Destiny experiment, is also reflected in the construct of the double-slit experiment and that the complementary results obtained by this treatment mirror that of Niels Bohr's principle of complementarity as well as reveal Einstein's hidden variables. Whereas the double-slit experiment serves to reveal the deterministic and indeterministic behavioral characteristics of our physical world, the Tempt Destiny experiment serves to reveal the deterministic and indeterministic behavioral characteristics of our actions. The unifying factor shared by both experiments is that they are of the same construct yielding similar results from the same energy. Given that, we seek to establish if the fundamental states of energy, i.e., certainty and probability, are indeed predetermined. Over the span of ten years, the Tempt Destiny experimental model of pairing choice and chance events has statistically obtained consistent results of absolute value. The evidence clearly infers that the fundamental mechanics of energy is a complement of two mutually exclusive mechanisms that bring into being - as opposed to revealing - the predetermined state of an event as either certain or probable, although not both simultaneously.
Volatile fractionation in the early solar system and chondrule/matrix complementarity
Bland, Philip A.; Alard, Olivier; Benedix, Gretchen K.; Kearsley, Anton T.; Menzies, Olwyn N.; Watt, Lauren E.; Rogers, Nick W.
2005-01-01
Bulk chondritic meteorites and terrestrial planets show a monotonic depletion in moderately volatile and volatile elements relative to the Sun's photosphere and CI carbonaceous chondrites. Although volatile depletion was the most fundamental chemical process affecting the inner solar nebula, debate continues as to its cause. Carbonaceous chondrites are the most primitive rocks available to us, and fine-grained, volatile-rich matrix is the most primitive component in these rocks. Several volatile depletion models posit a pristine matrix, with uniform CI-like chemistry across the different chondrite groups. To understand the nature of volatile fractionation, we studied minor and trace element abundances in fine-grained matrices of a variety of carbonaceous chondrites. We find that matrix trace element abundances are characteristic for a given chondrite group; they are depleted relative to CI chondrites, but are enriched relative to bulk compositions of their parent meteorites, particularly in volatile siderophile and chalcophile elements. This enrichment produces a highly nonmonotonic trace element pattern that requires a complementary depletion in chondrule compositions to achieve a monotonic bulk. We infer that carbonaceous chondrite matrices are not pristine: they formed from a material reservoir that was already depleted in volatile and moderately volatile elements. Additional thermal processing occurred during chondrule formation, with exchange of volatile siderophile and chalcophile elements between chondrules and matrix. This chemical complementarity shows that these chondritic components formed in the same nebula region. PMID:16174733
The Space Infrared Interferometric Telescope (SPIRIT) and its Complementarity to ALMA
NASA Technical Reports Server (NTRS)
Leisawitz, Dave
2007-01-01
We report results of a pre-Formulation Phase study of SPIRIT, a candidate NASA Origins Probe mission. SPIRIT is a spatial and spectral interferometer with an operating wavelength range 25 - 400 microns. SPIRIT will provide sub-arcsecond resolution images and spectra with resolution R = 3000 in a 1 arcmin field of view to accomplish three primary scientific objectives: (1) Learn how planetary systems form from protostellar disks, and how they acquire their chemical organization; (2) Characterize the family of extrasolar planetary systems by imaging the structure in debris disks to understand how and where planets of different types form; and (3) Learn how high-redshift galaxies formed and merged to form the present-day population of galaxies. In each of these science domains, SPIRIT will yield information complementary to that obtainable with the James Webb Space Telescope (JWST) and the Atacama Large Millimeter Array (ALMA), and all three observatories could operate contemporaneously. Here we shall emphasize the SPIRIT science goals (1) and (2) and the mission's complementarity with ALMA.
McArt, Scott H; Cook-Patton, Susan C; Thaler, Jennifer S
2012-04-01
Biodiversity is quantified via richness (e.g., the number of species), evenness (the relative abundance distribution of those species), or proportional diversity (a combination of richness and evenness, such as the Shannon index, H'). While empirical studies show no consistent relationship between these aspects of biodiversity within communities, the mechanisms leading to inconsistent relationships have received little attention. Here, using common evening primrose (Oenothera biennis) and its associated arthropod community, we show that relationships between arthropod richness, evenness, and proportional diversity are altered by plant genotypic richness. Arthropod richness increased with O. biennis genotypic richness due to an abundance-driven accumulation of species in response to greater plant biomass. Arthropod evenness and proportional diversity decreased with plant genotypic richness due to a nonadditive increase in abundance of a dominant arthropod, the generalist florivore/omnivore Plagiognathus politus (Miridae). The greater quantity of flowers and buds produced in polycultures, which resulted from positive complementarity among O. biennis genotypes, increased the abundance of this dominant insect. Using choice bioassays, we show that floral quality did not change in plant genotypic mixtures. These results elucidate mechanisms for how plant genotypic richness can modify relationships between arthropod richness, evenness, and proportional diversity. More broadly, our results suggest that trophic interactions may be a previously underappreciated factor controlling relationships between these different aspects of biodiversity.
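The three biodiversity measures this abstract contrasts have standard formulas: richness S, the Shannon index H' = -Σ p_i ln p_i, and Pielou's evenness J' = H'/ln S. A minimal sketch (abundance data invented for the example):

```python
import math

def diversity(abundances):
    """Return richness S, Shannon index H' = -sum(p_i ln p_i),
    and Pielou's evenness J' = H' / ln(S)."""
    counts = [n for n in abundances if n > 0]
    total = sum(counts)
    S = len(counts)
    H = -sum((n / total) * math.log(n / total) for n in counts)
    J = H / math.log(S) if S > 1 else 0.0
    return S, H, J

# Perfectly even community: J' = 1 and H' = ln(4)
S_even, H_even, J_even = diversity([10, 10, 10, 10])
# Community dominated by one species: same richness, lower evenness
S_dom, H_dom, J_dom = diversity([37, 1, 1, 1])
print(S_even, H_even, J_even)
print(S_dom, H_dom, J_dom)
```

The second community illustrates the pattern described in the abstract: a nonadditive increase in one dominant species leaves richness unchanged while depressing both evenness and H'.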
Roh, Jooho; Byun, Sung June; Seo, Youngsil; Kim, Minjae; Lee, Jae-Ho; Kim, Songmi; Lee, Yuno; Lee, Keun Woo; Kim, Jin-Kyoo; Kwon, Myung-Hee
2015-02-01
In contrast to a number of studies on the humanization of non-human antibodies, the reshaping of a non-human antibody into a chicken antibody has never been attempted. Therefore, nothing is known about the animal species-dependent compatibility of the framework regions (FRs) that sustain the appropriate conformation of the complementarity-determining regions (CDRs). In this study, we attempted the reshaping of the variable domains of the mouse catalytic anti-nucleic acid antibody 3D8 (m3D8) into the FRs of a chicken antibody (“chickenization”) by CDR grafting, which is a common method for the humanization of antibodies. CDRs of the acceptor chicken antibody that showed a high homology to the FRs of m3D8 were replaced with those of m3D8, resulting in the chickenized antibody (ck3D8). ck3D8 retained the biochemical properties (DNA binding, DNA hydrolysis, and cellular internalizing activities) and three-dimensional structure of m3D8 and showed reduced immunogenicity in chickens. Our study demonstrates that CDR grafting can be applied to the chickenization of a mouse antibody, probably due to the interspecies compatibility of the FRs.
López-Madrigal, Sergio; Beltrà, Aleixandre; Resurrección, Serena; Soto, Antonia; Latorre, Amparo; Moya, Andrés; Gil, Rosario
2014-01-01
Intracellular bacterial supply of essential amino acids is common among sap-feeding insects, thus complementing the scarcity of nitrogenous compounds in plant phloem. This is also the role of the two mealybug endosymbiotic systems whose genomes have been sequenced. In the nested endosymbiotic system from Planococcus citri (Pseudococcinae), “Candidatus Tremblaya princeps” and “Candidatus Moranella endobia” cooperate to synthesize essential amino acids, while in Phenacoccus avenae (Phenacoccinae) this function is performed by its single endosymbiont “Candidatus Tremblaya phenacola.” However, little is known regarding the evolution of essential amino acid supplementation strategies in other mealybug systems. To address this knowledge gap, we screened for the presence of six selected loci involved in essential amino acid biosynthesis in five additional mealybug species. We found evidence of ongoing complementarity among endosymbionts from insects of the subfamily Pseudococcinae, as well as horizontal gene transfer affecting endosymbionts from insects of the subfamily Phenacoccinae, providing a more comprehensive picture of the evolutionary history of these endosymbiotic systems. Additionally, we report two diagnostic motifs to help identify invasive mealybug species. PMID:25206351
Venail, Patrick A; Vives, Martha J
2013-01-01
Despite their importance as ecosystem drivers, our understanding of the influence of bacterial diversity on ecosystem functioning is limited. After identifying twelve bacterial strains from two petroleum-contaminated sites, we experimentally explored the impact of biodiversity on total density by manipulating the number of strains in culture. Irrespective of the origin of the bacteria relative to the contaminant, biodiversity positively influenced total density. However, bacteria cultured in the crude oil of their origin (autochthonous) reached higher densities than bacteria from another origin (allochthonous) and the relationship between diversity and density was stronger for autochthonous bacteria. By measuring the relative contribution of each strain to total density we showed that the observed positive effect of increasing diversity on total density was mainly due to positive interactions among species and not the presence of a particular species. Our findings can be explained by the complex chemical composition of crude oil and the necessity of a diverse array of organisms with complementary enzymatic capacities to achieve its degradation. The long term exposure to a contaminant may have allowed different bacteria to become adapted to the use of different fractions of the crude, resulting in higher complementarity in resource use in autochthonous bacteria compared to allochthonous ones. Our results could help improve the success of bioaugmentation as a bioremediation technique by suggesting the use of a diversified set of autochthonous organisms.
NASA Astrophysics Data System (ADS)
Borga, Marco; Baptiste, François; Zoccatelli, Davide
2016-04-01
High penetration of climate-related energy sources (such as solar and small hydropower) might be facilitated by using their complementarity in order to increase the balance between energy load and generation. In this study we examine and map the complementarity between solar PV and run-of-the-river energy along the river network of catchments in the Eastern Italian Alps which are significantly affected by glaciers. We analyze energy source complementarity across different temporal scales using two indicators: the standard deviation of the energy balance and the theoretical storage required for balancing generation and load (François et al., 2016). Temporal scales ranging from hours to years are assessed. By using a glacio-hydrological model able to simulate both the glacier and hydrology dynamics, we analyse the sensitivity of the obtained results with respect to different scenarios of glacier retreat. Reference: François, B., Hingray, B., Raynaud, D., Borga, M., Creutin, J.D., 2016: Increasing climate-related-energy penetration by integrating run-of-the-river hydropower to wind/solar mix. Renewable Energy, 87, 686-696.
Roorda, Debora L; Koomen, Helma M Y; Spilt, Jantine L; Thijs, Jochem T; Oort, Frans J
2013-02-01
The present study investigated whether the complementarity principle (mutual interactive behaviors are opposite on control and similar on affiliation) applies to teacher-child interactions within the kindergarten classroom. Furthermore, it was examined whether interactive behaviors and complementarity depended on children's externalizing and internalizing behaviors, interaction time, and interaction frequency. A total of 48 teachers and 179 selected kindergartners with a variety of externalizing and internalizing behaviors were observed in a small group task setting in the natural ecology of the classroom. Teachers' and children's interactive behaviors were rated by independent observers. Teachers reported about children's externalizing and internalizing behaviors. Multilevel analyses indicated that both teachers and children reacted complementarily on the control dimension but not on the affiliation dimension. Teachers showed more control and more affiliation toward children with higher levels of internalizing behavior. In addition, teachers displayed less affiliation toward children with higher levels of externalizing behavior, whereas those children did not show less affiliation themselves. Teachers' and children's complementarity tendencies on control were weaker if children had higher levels of externalizing behavior.
Carroll, Linda J.; Rothe, J. Peter
2010-01-01
Like other areas of health research, there has been increasing use of qualitative methods to study public health problems such as injuries and injury prevention. Likewise, the integration of qualitative and quantitative research (mixed-methods) is beginning to assume a more prominent role in public health studies. Moreover, using mixed-methods has great potential for gaining a broad and comprehensive understanding of injuries and their prevention. However, qualitative and quantitative research methods are based on two inherently different paradigms, and their integration requires a conceptual framework that permits the unity of these two methods. We present a theory-driven framework for viewing qualitative and quantitative research, which enables us to integrate them in a conceptually sound and useful manner. This framework has its foundation within the philosophical concept of complementarity, as espoused in the physical and social sciences, and draws on Bergson’s metaphysical work on the ‘ways of knowing’. Through understanding how data are constructed and reconstructed, and the different levels of meaning that can be ascribed to qualitative and quantitative findings, we can use a mixed-methods approach to gain a conceptually sound, holistic knowledge about injury phenomena that will enhance our development of relevant and successful interventions. PMID:20948937
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
Stachowicz, John J; Best, Rebecca J; Bracken, Matthew E S; Graham, Michael H
2008-12-02
Mounting concern over the loss of marine biodiversity has increased the urgency of understanding its consequences. This urgency spurred the publication of many short-term studies, which often report weak effects of diversity (species richness) driven by the presence of key species (the sampling effect). Longer-term field experiments are slowly accumulating, and they more often report strong diversity effects driven by species complementarity, calling into question the generality of earlier findings. However, because short- and long-term studies are typically conducted in different study systems, it is currently difficult to assess whether the contrasting results simply reflect biological or environmental differences among systems. In this paper, we compared the effect of intertidal seaweed species richness on biomass accumulation in mesocosms and field experiments using the same pool of species. We found that seaweed species richness increased biomass accumulation in field experiments in both short (2-month) and long (3-year) experiments, although effects were stronger in the long-term experiment. In contrast, richness had no effect in mesocosm experiments, where biomass accumulation was completely a function of species identity. We argue that the short-term experiments, like many published experiments on the topic, detect only a subset of possible mechanisms that operate in the field over the longer term because they lack sufficient environmental heterogeneity to allow expression of niche differences, and they are of insufficient length to capture population-level responses, such as recruitment. Many published experiments, therefore, likely underestimate the strength of diversity on ecosystem processes in natural ecosystems.
Complementarity of ResourceSat-1 AWiFS and Landsat TM/ETM+ sensors
Goward, S.N.; Chander, G.; Pagnutti, M.; Marx, A.; Ryan, R.; Thomas, N.; Tetrault, R.
2012-01-01
Considerable interest has been given to forming an international collaboration to develop a virtual moderate spatial resolution land observation constellation through aggregation of data sets from comparable national observatories such as the US Landsat, the Indian ResourceSat and related systems. This study explores the complementarity of India's ResourceSat-1 Advanced Wide Field Sensor (AWiFS) with the Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+). The analysis focuses on the comparative radiometry, geometry, and spectral properties of the two sensors. Two applied assessments of these data are also explored to examine the strengths and limitations of these alternate sources of moderate resolution land imagery with specific application domains. There are significant technical differences in these imaging systems including spectral band response, pixel dimensions, swath width, and radiometric resolution which produce differences in observation data sets. None of these differences was found to strongly limit comparable analyses in agricultural and forestry applications. Overall, we found that the AWiFS and Landsat TM/ETM+ imagery are comparable and in some ways complementary, particularly with respect to temporal repeat frequency. We have found that there are limits to our understanding of the AWiFS performance, for example, multi-camera design and stability of radiometric calibration over time, that leave some uncertainty that has been better addressed for Landsat through the Image Assessment System and related cross-sensor calibration studies. Such work still needs to be undertaken for AWiFS and similar observatories that may play roles in the Global Earth Observation System of Systems Land Surface Imaging Constellation.
Zhong, Gen-Shen; Wu, Min-Na; Guo, Xiao-Fang; Xu, Zhi-Shan; Zhang, Sheng-Hua; Zhen, Yong-Su
2013-01-01
Gelatinases are overexpressed in several types of malignancies and tumor stromal cells. Lidamycin is an enediyne antitumor antibiotic, which is composed of an apoprotein (LDP) and an active chromophore (AE). It is known that the heavy-chain complementarity-determining region-3 (CDR3) domain of scFv is important in antibody affinity. The aim of this study was to prepare the enediyne-energized fusion proteins with a heavy-chain CDR3 domain of anti-gelatinases scFv and lidamycin, and to evaluate their antitumor efficiency. Fusion proteins comprising the CDR3 domain and the lidamycin apoprotein were generated, and ELISA, immunofluorescence and FACS were used to analyze the binding of the fusion protein with antigen gelatinases. The purified fusion proteins were assembled with the lidamycin chromophore, and the antitumor effects were evaluated in vitro and in vivo. It was found that the CDR3-LDP and CDR3-LDP-CDR3 fusion proteins demonstrated high affinity towards antigen gelatinases. Following energization of CDR3-LDP with the enediyne chromophore, MTT assays showed potent cytotoxicity towards tumor cells; the IC50 values of CDR3-LDP-AE to HepG2 and Bel-7402 tumor cells were 1.05×10⁻¹¹ and 6.6×10⁻¹⁴ M, respectively. In addition, CDR3-LDP-AE displayed a potent antitumor effect in H22 cell xenografts in mice; the combination of CDR3-LDP (10 mg/kg) and CDR3-LDP-AE (0.25 and 0.5 mg/kg) revealed that the tumor inhibitory rates were 85.2 and 92.7%, respectively (P<0.05 compared with CDR3-LDP-AE). In conclusion, these results suggest that the CDR3-LDP fusion protein and its analog CDR3-LDP-AE may both be promising candidates for tumor targeting therapy. PMID:23599760
The traffic equilibrium problem with nonadditive path costs
Gabriel, S.A.; Bernstein, D.
1995-08-21
In this paper the authors present a version of the (static) traffic equilibrium problem in which the cost incurred on a path is not simply the sum of the costs on the arcs that constitute that path. The authors motivate this nonadditive version of the problem by describing several situations in which the classical additivity assumption fails. They also present an algorithm for solving nonadditive problems that is based on the recent NE/SQP algorithm, a fast and robust method for the nonlinear complementarity problem. Finally, they present a small example that illustrates both the importance of using nonadditive costs and the effectiveness of the NE/SQP method.
Stability of Linear Equations--Algebraic Approach
ERIC Educational Resources Information Center
Cherif, Chokri; Goldstein, Avraham; Prado, Lucio M. G.
2012-01-01
This article could be of interest to teachers of applied mathematics as well as to people who are interested in applications of linear algebra. We give a comprehensive study of linear systems from an application point of view. Specifically, we give an overview of linear systems and problems that can occur with the computed solution when the…
NASA Astrophysics Data System (ADS)
Yamasaki, Tadashi; Houseman, Gregory; Hamling, Ian; Postek, Elek
2010-05-01
We have developed a new parallelized 3-D numerical code, OREGANO_VE, for the solution of the general visco-elastic problem in a rectangular block domain. The mechanical equilibrium equation is solved using the finite element method for a (non-)linear Maxwell visco-elastic rheology. Time-dependent displacement and/or traction boundary conditions can be applied. Matrix assembly is based on a tetrahedral element defined by 4 vertex nodes and 6 nodes located at the midpoints of the edges, and within which displacement is described by a quadratic interpolation function. For evaluating viscoelastic relaxation, an explicit time-stepping algorithm (Zienkiewicz and Cormeau, Int. J. Num. Meth. Eng., 8, 821-845, 1974) is employed. We test the accurate implementation of OREGANO_VE by comparing numerical and analytic (or semi-analytic half-space) solutions to different problems in a range of applications: (1) equilibration of stress in a constant density layer after gravity is switched on at t = 0 tests the implementation of spatially variable viscosity and non-Newtonian viscosity; (2) displacement of the welded interface between two blocks of differing viscosity tests the implementation of viscosity discontinuities; (3) displacement of the upper surface of a layer under applied normal load tests the implementation of time-dependent surface tractions; (4) visco-elastic response to dyke intrusion (compared with the solution in a half-space) tests the implementation of all aspects. In each case, the accuracy of the code is validated subject to use of a sufficiently small time step, providing assurance that the OREGANO_VE code can be applied to a range of visco-elastic relaxation processes in three dimensions, including post-seismic deformation and post-glacial uplift. The OREGANO_VE code includes a capability for representation of prescribed fault slip on an internal fault. The surface displacement associated with large earthquakes can be detected by some geodetic observations
Hlaing, Lwin Mar; Fahmida, Umi; Htet, Min Kyaw; Utomo, Budi; Firmansyah, Agus; Ferguson, Elaine L
2016-07-01
Poor feeding practices result in inadequate nutrient intakes in young children in developing countries. To improve practices, local food-based complementary feeding recommendations (CFR) are needed. This cross-sectional survey aimed to describe current food consumption patterns of 12-23-month-old Myanmar children (n 106) from Ayeyarwady region in order to identify nutrient requirements that are difficult to achieve using local foods and to formulate affordable and realistic CFR to improve dietary adequacy. Weekly food consumption patterns were assessed using a 12-h weighed dietary record, single 24-h recall and a 5-d food record. Food costs were estimated by market surveys. CFR were formulated by linear programming analysis using WHO Optifood software and evaluated among mothers (n 20) using trial of improved practices (TIP). Findings showed that Ca, Zn, niacin, folate and Fe were 'problem nutrients': nutrients that did not achieve 100 % recommended nutrient intake even when the diet was optimised. Chicken liver, anchovy and roselle leaves were locally available nutrient-dense foods that would fill these nutrient gaps. The final set of six CFR would ensure dietary adequacy for five of twelve nutrients at a minimal cost of 271 kyats/d (based on the exchange rate of 900 kyats/USD at the time of data collection: 3rd quarter of 2012), but inadequacies remained for niacin, folate, thiamin, Fe, Zn, Ca and vitamin B6. TIP showed that mothers believed liver and vegetables would cause worms and diarrhoea, but these beliefs could be overcome to successfully promote liver consumption. Therefore, an acceptable set of CFR were developed to improve the dietary practices of 12-23-month-old Myanmar children using locally available foods. Alternative interventions such as fortification, however, are still needed to ensure dietary adequacy of all nutrients.
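The kind of linear programme behind such food-based recommendation analyses can be sketched compactly: minimise daily food cost subject to nutrient lower bounds and realistic portion caps. The foods, nutrient values, requirements, prices, and caps below are illustrative assumptions (not the survey's data), and `scipy.optimize.linprog` stands in for Optifood's solver:

```python
# Sketch of a diet linear programme: minimise cost subject to nutrient
# lower bounds and portion caps. All numbers are assumed for illustration.
import numpy as np
from scipy.optimize import linprog

foods = ["chicken_liver", "anchovy", "roselle_leaves", "rice"]
cost = np.array([30.0, 20.0, 5.0, 10.0])        # kyats per serving (assumed)
# rows: iron (mg), zinc (mg), energy (kcal) per serving (assumed values)
nutrients = np.array([
    [3.0, 1.2, 0.8, 0.2],    # iron
    [1.5, 0.9, 0.3, 0.4],    # zinc
    [50., 60., 10., 130.],   # energy
])
req = np.array([5.0, 3.0, 300.0])               # daily requirements (assumed)
max_servings = [2, 2, 3, 4]                     # realistic portion caps

# linprog minimises c @ x subject to A_ub @ x <= b_ub, so each nutrient
# lower bound nutrients @ x >= req becomes -nutrients @ x <= -req.
res = linprog(cost, A_ub=-nutrients, b_ub=-req,
              bounds=list(zip([0] * 4, max_servings)), method="highs")
print(dict(zip(foods, res.x.round(2))), "cost:", round(res.fun, 1))
```

A nutrient whose requirement cannot be met under any combination of portion caps would make the programme infeasible, which is one way a "problem nutrient" in the sense above can show up in this formulation.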
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm.
Design of Linear Quadratic Regulators and Kalman Filters
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L.
1986-01-01
AESOP solves problems associated with design of controls and state estimators for linear time-invariant systems. Systems considered are modeled in state-variable form by set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are linear quadratic regulator (LQR) design problem and steady-state Kalman filter design problem. AESOP is interactive. User solves design problems and analyzes solutions in single interactive session. Both numerical and graphical information available to user during the session.
Polonelli, Luciano; Pontón, José; Elguezabal, Natalia; Moragues, María Dolores; Casoli, Claudio; Pilotti, Elisabetta; Ronzi, Paola; Dobroff, Andrey S.; Rodrigues, Elaine G.; Juliano, Maria A.; Maffei, Domenico Leonardo; Magliani, Walter; Conti, Stefania; Travassos, Luiz R.
2008-01-01
Background Complementarity-determining regions (CDRs) are immunoglobulin (Ig) hypervariable domains that determine specific antibody (Ab) binding. We have shown that synthetic CDR-related peptides and many decapeptides spanning the variable region of a recombinant yeast killer toxin-like antiidiotypic Ab are candidacidal in vitro. An alanine-substituted decapeptide from the variable region of this Ab displayed increased cytotoxicity in vitro and/or therapeutic effects in vivo against various bacteria, fungi, protozoa and viruses. The possibility that isolated CDRs, represented by short synthetic peptides, may display antimicrobial, antiviral and antitumor activities irrespective of Ab specificity for a given antigen is addressed here. Methodology/Principal Findings CDR-based synthetic peptides of murine and human monoclonal Abs directed to: a) a protein epitope of Candida albicans cell wall stress mannoprotein; b) a synthetic peptide containing well-characterized B-cell and T-cell epitopes; c) a carbohydrate blood group A substance, showed differential inhibitory activities in vitro, ex vivo and/or in vivo against C. albicans, HIV-1 and B16F10-Nex2 melanoma cells, conceivably involving different mechanisms of action. Antitumor activities involved peptide-induced caspase-dependent apoptosis. Engineered peptides, obtained by alanine substitution of Ig CDR sequences, and used as surrogates of natural point mutations, showed further differential increased/unaltered/decreased antimicrobial, antiviral and/or antitumor activities. The inhibitory effects observed were largely independent of the specificity of the native Ab and involved chiefly germline encoded CDR1 and CDR2 of light and heavy chains. Conclusions/Significance The high frequency of bioactive peptides based on CDRs suggests that Ig molecules are sources of an unlimited number of sequences potentially active against infectious agents and tumor cells. The easy production and low cost of small sized synthetic
Arrenberg, Sebastian; et al.
2013-10-31
In this Report we discuss the four complementary searches for the identity of dark matter: direct detection experiments that look for dark matter interacting in the lab, indirect detection experiments that connect lab signals to dark matter in our own and other galaxies, collider experiments that elucidate the particle properties of dark matter, and astrophysical probes sensitive to non-gravitational interactions of dark matter. The complementarity among the different dark matter searches is discussed qualitatively and illustrated quantitatively in several theoretical scenarios. Our primary conclusion is that the diversity of possible dark matter candidates requires a balanced program based on all four of those approaches.
The Black Hole Information Problem
NASA Astrophysics Data System (ADS)
Polchinski, Joseph
The black hole information problem has been a challenge since Hawking's original 1975 paper. It led to the discovery of AdS/CFT, which gave a partial resolution of the paradox. However, recent developments, in particular the firewall puzzle, show that there is much that we do not understand. I review the black hole, Hawking radiation, and the Page curve, and the classic form of the paradox. I discuss AdS/CFT as a partial resolution. I then discuss black hole complementarity and its limitations, leading to many proposals for different kinds of 'drama.' I conclude with some recent ideas. Presented at the 2014-15 Jerusalem Winter School and the 2015 TASI.
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio Optimization (PO) is a problem in Finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
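A minimal sketch of the underlying (non-fuzzy) mean-variance trade-off, using assumed returns and covariances and a generic solver rather than the paper's fuzzy multi-objective formulation:

```python
# Mean-variance portfolio sketch: minimise portfolio variance minus a
# return bonus over long-only weights summing to one. The expected
# returns, covariance matrix, and trade-off lam are assumed toy values.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])              # expected returns (assumed)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])           # return covariance (assumed)
lam = 1.0                                      # risk-return trade-off

def objective(w):
    return w @ cov @ w - lam * (mu @ w)        # variance minus return bonus

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
res = minimize(objective, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
               constraints=cons, method="SLSQP")
w = res.x
print(w.round(3), "return:", round(mu @ w, 4), "variance:", round(w @ cov @ w, 4))
```

Replacing the crisp `mu` and `cov` with fuzzy numbers and solving for several objectives simultaneously is the step the paper addresses; the sketch only shows the crisp core.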
NASA Technical Reports Server (NTRS)
Weber, Arthur L.
1989-01-01
Glyceraldehyde-3-phosphate acts as the substrate in a model of early self-replication of a phosphodiester copolymer of glycerate-3-phosphate and glycerol-3-phosphate. This model of self-replication is based on covalent complementarity in which information transfer is mediated by a single covalent bond, in contrast to multiple weak interactions that establish complementarity in nucleic acid replication. This replication model is connected to contemporary biochemistry through its use of glyceraldehyde-3-phosphate, a central metabolite of glycolysis and photosynthesis.
NASA Astrophysics Data System (ADS)
Revenough, Justin
Elastic waves propagating in simple media manifest a surprisingly rich collection of phenomena. Although some can't withstand the complexities of Earth's structure, the majority only grow more interesting and more important as remote sensing probes for seismologists studying the planet's interior. To fully mine the information carried to the surface by seismic waves, seismologists must produce accurate models of the waves. Great strides have been made in this regard. Problems that were entirely intractable a decade ago are now routinely solved on inexpensive workstations. The mathematical representations of waves coded into algorithms have grown vastly more sophisticated and are troubled by many fewer approximations, enforced symmetries, and limitations. They are far from straightforward, and seismologists using them need a firm grasp on wave propagation in simple media. Linear Elastic Waves, by applied mathematician John G. Harris, responds to this need.
NASA Astrophysics Data System (ADS)
Birx, Daniel
1992-03-01
Among the family of particle accelerators, the Induction Linear Accelerator is the best suited for the acceleration of high current electron beams. Because the electromagnetic radiation used to accelerate the electron beam is not stored in the cavities but is supplied by transmission lines during the beam pulse, it is possible to utilize very low Q (typically < 10) structures and very large beam pipes. This combination increases the beam breakup limited maximum currents to of order kiloamperes. The micropulse lengths of these machines are measured in tens of nanoseconds and duty factors as high as 10⁻⁴ have been achieved. Until recently the major problem with these machines has been associated with the pulse power drive. Beam currents of kiloamperes and accelerating potentials of megavolts require peak power drives of gigawatts since no energy is stored in the structure. The marriage of linear accelerator technology and nonlinear magnetic compressors has produced some unique capabilities. It now appears possible to produce electron beams with average currents measured in amperes, peak currents in kiloamperes and gradients exceeding 1 MeV/meter, with power efficiencies approaching 50%. The nonlinear magnetic compression technology has replaced the spark gap drivers used on earlier accelerators with state-of-the-art all-solid-state SCR commutated compression chains. The reliability of these machines is now approaching 10¹⁰-shot MTBF. In the following paper we will briefly review the historical development of induction linear accelerators and then discuss the design considerations.
Onana, Vincent-de-Paul; Trouvé, Emmanuel; Mauris, Gilles; Rudant, Jean-Paul; Tonyé, Emmanuel
2004-01-10
A new linear-features detection method is proposed for extracting straight edges and lines in synthetic-aperture radar images. This method is based on the localized Radon transform, which produces geometrical integrals along straight lines. In the transformed domain, linear features have a specific signature: They appear as strongly contrasted structures, which are easier to extract with the conventional ratio edge detector. The proposed method is dedicated to applications such as geographical map updating for which prior information (approximate length and orientation of features) is available. Experimental results show the method's robustness with respect to poor radiometric contrast and hidden parts and its complementarity to conventional pixel-by-pixel approaches.
The generalized pole assignment problem. [dynamic output feedback problems
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1979-01-01
Two dynamic output feedback problems for a linear, strictly proper system are considered, along with their interrelationships. The problems are formulated in the frequency domain and investigated in terms of linear equations over rings of polynomials. Necessary and sufficient conditions are expressed using genericity.
A neural network for bounded linear programming
Culioli, J.C.; Protopopescu, V.; Britton, C.; Ericson, N.
1989-01-01
The purpose of this paper is to describe a neural network implementation of an algorithm recently designed at ORNL to solve the Transportation and the Assignment Problems, and, more generally, any explicitly bounded linear program. 9 refs.
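For reference, the Transportation Problem the network targets is an explicitly bounded linear program; a conventional solver formulation (with toy supplies, demands, and unit costs, all assumed) looks like:

```python
# Transportation Problem as a linear programme: minimise total shipping
# cost of x[i, j] units from source i to sink j, subject to supply and
# demand equalities. Supplies, demands, and costs are assumed toy data.
import numpy as np
from scipy.optimize import linprog

supply = np.array([20, 30])          # units available at each source
demand = np.array([10, 25, 15])      # units required at each sink
costs = np.array([[8., 6., 10.],     # cost per unit, source i -> sink j
                  [9., 12., 13.]])

m, n = costs.shape
A_eq, b_eq = [], []
for i in range(m):                   # each source ships exactly its supply
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                   # each sink receives exactly its demand
    row = np.zeros(m * n)
    row[j::n] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(costs.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
plan = res.x.reshape(m, n)
print(plan, "total cost:", res.fun)
```

The Assignment Problem mentioned in the abstract is the special case where every supply and demand equals one and the solution is a permutation.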
ERIC Educational Resources Information Center
Demana, Franklin; Waits, Bert K.
1993-01-01
Discusses solutions to real-world linear particle-motion problems using graphing calculators to simulate the motion and traditional analytic methods of calculus. Applications include (1) changing circular or curvilinear motion into linear motion and (2) linear particle accelerators in physics. (MDH)
Evolving evolutionary algorithms using linear genetic programming.
Oltean, Mihai
2005-01-01
A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem, and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly to, and sometimes even better than, standard approaches on several well-known benchmark problems.
Randen, I; Potter, K N; Li, Y; Thompson, K M; Pascual, V; Førre, O; Natvig, J B; Capra, J D
1993-10-01
Staphylococcal protein A (SPA) has two distinct binding sites on human immunoglobulins. In addition to binding to the Fc region of most IgG molecules, an "alternative" binding site has been localized to the Fab region of human immunoglobulins encoded by heavy chain variable gene segments belonging to the VHIII family. Comparison of amino acid sequences of closely related SPA-binding and -non-binding proteins suggested that VHIII-specific residues in the second complementarity-determining region (CDR2) were likely responsible for SPA binding activity. Site-directed mutagenesis of a single amino acid residue in CDR2 converted an IgM rheumatoid factor which did not bind SPA to an SPA binder. These findings, therefore, locate a critical site involved in SPA binding to the CDR2 of human immunoglobulins encoded by VHIII family gene segments.
Raaphorst, F M; Tami, J; Sanz, I E
1996-01-01
Methods have been developed to rapidly visualize the size distribution of third complementarity-determining regions (CDR3) in immunoglobulin (Ig) and T-cell receptor (TCR) molecules. DNA fragments spanning the Ig or TCR CDR3 are generated by PCR using primers at fixed positions in the variable and constant segments. These fragments differ in length due to size variation of the CDR3s. Visualization of the amplification products in polyacrylamide gels as a "CDR3 fingerprint profile" is a rough measure for the complexity of the Ig and TCR antigen-binding specificities. We report an adaptation of this method for the analysis of human Ig heavy-chain genes that incorporates silver staining, which allows for the fine analysis of specific regions of the profiles. This is especially useful for the study of low-abundant transcripts.
Systems of Inhomogeneous Linear Equations
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
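The specialized Gaussian elimination for tridiagonal systems mentioned above is commonly known as the Thomas algorithm; a minimal sketch (standard textbook form, not taken from this text) is:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n): sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d (a[0] and c[-1] are unused/zero)."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Second-difference matrix, as arises from cubic splines or d^2/dx^2.
n = 6
a = np.full(n, -1.0); a[0] = 0.0
b = np.full(n, 2.0)
c = np.full(n, -1.0); c[-1] = 0.0
d = np.ones(n)
x = thomas_solve(a, b, c, d)
```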
Some Topics in Linear Estimation,
1981-01-01
Table of contents (excerpt): 1. The Integral Equations of Smoothing and Filtering; 1a. The Smoothing and Filtering Problems; 2. Some Examples: Stationary Processes; 2a. Scalar Stationary Processes over Infinite Intervals; 2b. Finite Intervals; 4. A Concluding Remark. (T. Kailath) The estimation problems are discussed in the context of the integral equations of smoothing and filtering.
ERIC Educational Resources Information Center
Kinsella, John J.
1970-01-01
Discussed are the nature of a mathematical problem, problem solving in the traditional and modern mathematics programs, problem solving and psychology, research related to problem solving, and teaching problem solving in algebra and geometry. (CT)
NASA Astrophysics Data System (ADS)
Young, T.
This book is intended to be used as a textbook in a one-semester course at a variety of levels. Because of self-study features incorporated, it may also be used by practicing electronic engineers as a formal and thorough introduction to the subject. The distinction between linear and digital integrated circuits is discussed, taking into account digital and linear signal characteristics, linear and digital integrated circuit characteristics, the definitions of linear and digital circuits, applications of digital and linear integrated circuits, and aspects of fabrication, packaging, classification, and numbering. Operational amplifiers are considered along with linear integrated circuit (LIC) power requirements and power supplies, voltage and current regulators, linear amplifiers, linear integrated circuit oscillators, wave-shaping circuits, active filters, D/A and A/D converters, demodulators, comparators, instrument amplifiers, current difference amplifiers, analog circuits and devices, and aspects of troubleshooting.
NASA Astrophysics Data System (ADS)
Hilbert, Bryan
2012-10-01
These observations will be used to monitor the signal non-linearity of the IR channel, as well as to update the IR channel non-linearity calibration reference file. The non-linearity behavior of each pixel in the detector will be investigated through the use of full frame and subarray flat fields, while the photometric behavior of point sources will be studied using observations of 47 Tuc. This is a continuation of the Cycle 19 non-linearity monitor, program 12696.
NASA Astrophysics Data System (ADS)
Hilbert, Bryan
2013-10-01
These observations will be used to monitor the signal non-linearity of the IR channel, as well as to update the IR channel non-linearity calibration reference file. The non-linearity behavior of each pixel in the detector will be investigated through the use of full frame and subarray flat fields, while the photometric behavior of point sources will be studied using observations of 47 Tuc. This is a continuation of the Cycle 20 non-linearity monitor, program 13079.
Generalised Assignment Matrix Methodology in Linear Programming
ERIC Educational Resources Information Center
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
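Assignment-matrix problems of the kind the paper solves with Excel's Solver can also be solved programmatically; SciPy's Hungarian-method routine works directly on the cost matrix. The 2x2 cost matrix below is an assumed toy example, not from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Minimize total cost of assigning each worker (row) to one task (column).
cost = np.array([[4.0, 1.0],
                 [2.0, 0.0]])
rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()   # optimal: worker 0 -> task 1, worker 1 -> task 0
```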
Linear stochastic optimal control and estimation
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1976-01-01
A digital program has been written to solve the LSOCE problem using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system, disturbed by white noise, in such a way as to minimize a quadratic performance index.
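The deterministic-control core of such a problem is the classic linear-quadratic regulator, obtained from the algebraic Riccati equation. A minimal sketch with an assumed two-state plant and weights (this is the standard LQR recipe, not the program's actual formulation):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed example plant: a double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting in the quadratic index
R = np.array([[1.0]])  # control weighting

P = solve_continuous_are(A, B, Q, R)   # solve A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain, u = -K x
closed_poles = np.linalg.eigvals(A - B @ K)
```

The closed loop is guaranteed stable for a stabilizable plant with these weightings.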
A Linear Algebraic Approach to Teaching Interpolation
ERIC Educational Resources Information Center
Tassa, Tamir
2007-01-01
A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
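The linear-algebra viewpoint can be made concrete: requiring p(x_i) = y_i for a polynomial written in the monomial basis gives a linear system with a Vandermonde matrix, and each alternative basis (Lagrange, Newton) just changes the matrix. The data points below are an assumed example.

```python
import numpy as np

# Interpolate 4 points with a cubic: V c = y, V the Vandermonde matrix.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 5.0, 10.0])   # these happen to satisfy y = x**2 + 1

V = np.vander(x, increasing=True)     # columns are 1, x, x^2, x^3
c = np.linalg.solve(V, y)             # coefficients in the monomial basis
```

Because the data lie on y = x^2 + 1, the solver recovers coefficients [1, 0, 1, 0].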
Singh, Mangal; Awasthi, Ashutosh; Soni, Sumit K; Singh, Rakshapal; Verma, Rajesh K; Kalra, Alok
2015-10-27
An assessment of the roles of rhizospheric microbial diversity in plant growth is helpful in understanding plant-microbe interactions. Using random combinations of rhizospheric bacterial species at different richness levels, we analysed the contribution of species richness, composition, interactions and identity to soil microbial respiration and plant biomass. We showed that bacterial inoculation in the plant rhizosphere enhanced microbial respiration and plant biomass, with complementary relationships among bacterial species. Plant growth was found to increase linearly with inoculation of rhizospheric bacterial communities with increasing levels of species or plant growth promoting trait diversity. However, inoculation of diverse bacterial communities having a single plant growth promoting trait, i.e., nitrogen fixation, could not enhance plant growth over inoculation of a single bacterium. Our results indicate that bacterial diversity in the rhizosphere affects ecosystem functioning through complementary relationships among plant growth promoting traits and may play significant roles in delivering microbial services to plants.
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. BLAS library is portable and efficient source of basic operations for designers of programs involving linear algebriac computations. BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Optimal design of linear and non-linear dynamic vibration absorbers
NASA Astrophysics Data System (ADS)
Jordanov, I. N.; Cheshankov, B. I.
1988-05-01
An efficient numerical method is applied to obtain optimal parameters for both linear and non-linear damped dynamic vibration absorbers. The minimization of the vibration response has been carried out for damped as well as undamped force excited primary systems with linear and non-linear spring characteristics. Comparison is made with the optimum absorber parameters that are determined by using Den Hartog's classical results in the linear case. Six optimization criteria by which the response is minimized over narrow and broad frequency bands are examined. Pareto optimal solutions of the multi-objective decision making problem are obtained.
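For the linear case mentioned, Den Hartog's classical result gives the optimal absorber parameters in closed form. The formulas below are the standard textbook statement for an undamped, harmonically force-excited primary system; the paper's own notation and criteria may differ.

```python
import math

def den_hartog_optimum(mu):
    """Classical optimal tuning of a dynamic vibration absorber
    (standard textbook result): mu = absorber mass / primary mass.
    Returns (optimal frequency ratio, optimal absorber damping ratio)."""
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

# A 5% absorber mass ratio (assumed example value).
f_opt, zeta_opt = den_hartog_optimum(0.05)
```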
An Intuitive Approach in Teaching Linear Programming in High School.
ERIC Educational Resources Information Center
Ulep, Soledad A.
1990-01-01
Discusses solving inequality problems involving linear programming. Describes the usual and alternative approaches. Presents an intuitive approach for finding a feasible solution by maximizing the objective function. (YP)
Linear elastic fracture mechanics primer
NASA Technical Reports Server (NTRS)
Wilson, Christopher D.
1992-01-01
This primer is intended to remove the black-box perception of fracture mechanics computer software by structural engineers. The fundamental concepts of linear elastic fracture mechanics are presented with emphasis on the practical application of fracture mechanics to real problems. Numerous rules of thumb are provided. Recommended texts for additional reading, and a discussion of the significance of fracture mechanics in structural design, are given. Griffith's criterion for crack extension, Irwin's elastic stress field near the crack tip, and the influence of small-scale plasticity are discussed. Common stress intensity factor solutions and methods for determining them are included. Fracture toughness and subcritical crack growth are discussed. The application of fracture mechanics to damage tolerance and fracture control is discussed. Several example problems and a practice set of problems are given.
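The two criteria named in the primer can be stated compactly. These are the standard textbook forms (for a through-crack of length 2a in a wide plate); the primer's own notation may differ.

```latex
% Irwin's stress-intensity criterion: crack extension when K_I reaches
% the fracture toughness K_{Ic}.
K_I = \sigma \sqrt{\pi a}, \qquad \text{extension when } K_I \ge K_{Ic}

% Griffith's energy criterion for an ideally brittle material
% (plane stress), with surface energy \gamma_s:
\sigma_f = \sqrt{\frac{2 E \gamma_s}{\pi a}}
```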
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
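The core conversion, linearizing a nonlinear objective at the current iterate and solving a linear program over a shrinking step region, can be sketched generically. This is a bare-bones sequential-linear-programming loop on an assumed toy problem, not the paper's eigenvalue-sensitivity formulation or continuation procedure.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize (x0-1)^2 + (x1-2)^2  subject to  x0 + x1 <= 2,  x >= 0.
# Constrained optimum is (0.5, 1.5).
grad = lambda x: 2.0 * (x - np.array([1.0, 2.0]))  # objective gradient
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([2.0])

x = np.zeros(2)
trust = 0.5
for _ in range(80):
    g = grad(x)
    # Step d bounded by the trust region, and x + d must stay nonnegative.
    bounds = [(max(-trust, -xi), trust) for xi in x]
    res = linprog(g, A_ub=A_ub, b_ub=b_ub - A_ub @ x, bounds=bounds)
    x = x + res.x
    trust *= 0.9   # shrink the step region so the iterates settle
```

Each subproblem is a plain LP, which is what lets the method scale to many inequality constraints.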
Enhanced studies on a composite time integration scheme in linear and non-linear dynamics
NASA Astrophysics Data System (ADS)
Klarmann, S.; Wagner, W.
2015-03-01
In Bathe and Baig (Comput Struct 83:2513-2524, 2005), Bathe (Comput Struct 85:437-445, 2007), Bathe and Noh (Comput Struct 98-99:1-6, 2012) Bathe et al. have proposed a composite implicit time integration scheme for non-linear dynamic problems. This paper is aimed at the further investigation of the scheme's behaviour for use in case of linear and non-linear problems. Therefore, the examination of the amplification matrix of the scheme will be extended in order to get in addition the properties for linear calculations. Besides, it will be demonstrated that the integration scheme also has an impact on some of these properties when used for non-linear calculations. In conclusion, a recommendation for the only selectable parameter of the scheme will be given for application in case of geometrically non-linear calculations.
Lorentz Invariance Violation: the Latest Fermi Results and the GRB-AGN Complementarity
NASA Technical Reports Server (NTRS)
Bolmont, J.; Vasileiou, V.; Jacholkowska, A.; Piron, F.; Couturier, C.; Granot, J.; Stecker, F. W.; Cohen-Tanugi, J.; Longo, F.
2013-01-01
Because they are bright and distant, Gamma-ray Bursts (GRBs) have been used for more than a decade to test the propagation of photons and to constrain relevant Quantum Gravity (QG) models in which the velocity of photons in vacuum can depend on their energy. With its unprecedented sensitivity and energy coverage, the Fermi satellite has provided the most constraining results on the QG energy scale so far. In this talk, the latest results obtained from the analysis of four bright GRBs observed by the Large Area Telescope will be reviewed. These robust results, cross-checked using three different analysis techniques, set the limit on the QG energy scale at E(sub QG,1) greater than 7.6 times the Planck energy for linear dispersion and E(sub QG,2) greater than 1.3 x 10(exp 11) gigaelectron volts for quadratic dispersion (95% CL). After describing the data and the analysis techniques in use, the results will be discussed and compared with the latest constraints obtained with Active Galactic Nuclei.
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression…
Numerical methods for control optimization in linear systems
NASA Astrophysics Data System (ADS)
Tyatyushkin, A. I.
2015-05-01
Numerical methods are considered for solving optimal control problems in linear systems, namely, terminal control problems with control and phase constraints and time-optimal control problems. Several algorithms with various computer storage requirements are proposed for solving these problems. The algorithms are intended for finding an optimal control in linear systems having certain features, for example, when the reachable set of a system has flat faces.
Patel, Chetan N; Bauer, Scott P; Davies, Julian; Durbin, Jim D; Shiyanova, Tatiyana L; Zhang, Kai; Tang, Jason X
2016-02-01
Aspartate (Asp) isomerization is a common degradation pathway and a potential critical quality attribute that needs to be well characterized during the optimization and development of therapeutic antibodies. A putative Asp-serine (Ser) isomerization motif was identified in the complementarity-determining region of a humanized monoclonal antibody and shown to be a developability risk using accelerated stability analyses. To address this issue, we explored different antibody engineering strategies. Direct engineering of the Asp residue resulted in a greater than 5× loss of antigen-binding affinity and bioactivity, indicating a critical role for this residue. In contrast, rational engineering of the Ser residue at the n+1 position had a negligible impact on antigen binding affinity and bioactivity compared with the parent molecule. Furthermore, the n+1 engineering strategy effectively eliminated Asp isomerization as determined by accelerated stability analysis. This outcome affirms that the rate of Asp isomerization is strongly dependent on the identity of the n+1 residue. This report highlights a systematic antibody engineering strategy for mitigating an Asp isomerization developability risk during lead optimization.
Liu, Bitao; Li, Hongbo; Zhu, Biao; Koide, Roger T; Eissenstat, David M; Guo, Dali
2015-10-01
In most cases, both roots and mycorrhizal fungi are needed for plant nutrient foraging. Frequently, the colonization of roots by arbuscular mycorrhizal (AM) fungi seems to be greater in species with thick and sparsely branched roots than in species with thin and densely branched roots. Yet, whether a complementarity exists between roots and mycorrhizal fungi across these two types of root system remains unclear. We measured traits related to nutrient foraging (root morphology, architecture and proliferation, AM colonization and extramatrical hyphal length) across 14 coexisting AM subtropical tree species following root pruning and nutrient addition treatments. After root pruning, species with thinner roots showed more root growth, but lower mycorrhizal colonization, than species with thicker roots. Under multi-nutrient (NPK) addition, root growth increased, but mycorrhizal colonization decreased significantly, whereas no significant changes were found under nitrogen or phosphate additions. Moreover, root length proliferation was mainly achieved by altering root architecture, but not root morphology. Thin-root species seem to forage nutrients mainly via roots, whereas thick-root species rely more on mycorrhizal fungi. In addition, the reliance on mycorrhizal fungi was reduced by nutrient additions across all species. These findings highlight complementary strategies for nutrient foraging across coexisting species with contrasting root traits.
Broodman, Ingrid; de Costa, Dominique; Stingl, Christoph; Dekker, Lennard J M; VanDuijn, Martijn M; Lindemans, Jan; van Klaveren, Rob J; Luider, Theo M
2012-01-01
Sera from lung cancer patients contain antibodies against tumor-associated antigens. Specific amino acid sequences of the complementarity-determining regions (CDRs) in the antigen-binding fragment (Fab) of these antibodies have potential as lung cancer biomarkers. Detection and identification of CDRs by mass spectrometry can be significantly improved by reducing the complexity of the immunoglobulin molecule. Our aim was to molecularly dissect IgG into κ and λ fragments to reduce the complexity and thereby identify substantially more CDRs than by total Fab isolation alone. We purified Fab, Fab-κ, Fab-λ, and κ and λ light chains from the sera of 10 stage I lung adenocarcinoma patients and 10 matched controls, all current or former smokers. After purification, the immunoglobulin fragments were enzymatically digested and measured by high-resolution mass spectrometry. Finally, we compared the number of CDRs identified in these immunoglobulin fragments with that in the Fab fragments. Twice as many CDRs were identified when Fab-κ, Fab-λ, κ and λ were combined (3330) than in the Fab fraction (1663) alone. The number of CDRs and the κ:λ ratio were statistically similar in cases and controls. Molecular dissection of IgG identifies significantly more CDRs, which increases the likelihood of finding lung cancer-related CDR sequences.
Wang, Zhun; Zhang, Tie; Hu, Hongbo; Zhang, Huiyuan; Yang, Zhi; Cui, Lianxian; He, Wei
2008-12-18
Human Vdelta2 gammadelta T lymphocytes killed multiple solid tumors, displaying therapeutic efficacy comparable to that of the anti-tumor chemical cis-platinum in adoptive-transfer experiments in both nude and SCID murine models, as shown in the present study. We previously found that the T cell receptor (TCR) gammadelta recognizes tumors via its complementarity-determining region 3 (CDR3), briefly termed CDR3delta. Based on the specific binding of CDR3delta to tumor targets, we developed a novel tumor-targeting antibody whose heavy-chain CDR3 is replaced by a CDR3delta sequence derived from human ovarian carcinoma (OEC) infiltrating gammadelta T cells (gammadeltaTILs). This CDR3delta-grafted antibody, OT3, exhibited specific binding to the OEC line SKOV3 both in vitro and in vivo: it bound specifically to several tumor cell lines, interacted with heat shock protein (HSP) 60, and triggered ADCC against tumors in vitro, and it enabled tumor imaging with the radioisotope 99mTc-labeled antibody OT3 in vivo. Moreover, the immunotoxin OT3-DT, the CDR3delta-grafted antibody OT3 chemically conjugated with diphtheria toxin (DT), inhibited the growth of several solid tumors, including OEC, cervix adenocarcinoma, hepatocellular carcinoma, and rectum adenocarcinoma, to various extents in nude mice. We have therefore identified and confirmed a novel therapeutic strategy for targeting solid tumors that exploits the immune recognition characteristics of gammadelta T cells.
Mehra, J.
1987-05-01
In this paper, the main outlines of the discussions between Niels Bohr and Albert Einstein, Werner Heisenberg, and Erwin Schroedinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory - formulated in fall 1926 by Dirac, London, and Jordan - Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such - formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930 - were continued during the next decades. All these aspects are briefly summarized.
Stanfield, Robyn L.; Wilson, Ian A.; Smider, Vaughn V.
2016-01-01
A subset of bovine antibodies have an exceptionally long third heavy-chain complementarity determining region (CDR H3) that is highly variable in sequence and includes multiple cysteines. These long CDR H3s (up to 69 residues) fold into a long stalk atop which sits a knob domain that is located far from the antibody surface. Three new bovine Fab crystal structures have been determined to decipher the conserved and variable features of ultralong CDR H3s that lead to diversity in antigen recognition. Despite high sequence variability, the stalks adopt a conserved β-ribbon structure, while the knob regions share a conserved β-sheet that serves as a scaffold for two connecting loops of variable length and conformation, as well as one conserved disulfide. Variation in patterns and connectivity of the remaining disulfides contribute to the knob structural diversity. The unusual architecture of these ultralong bovine CDR H3s for generating diversity is unique in adaptive immune systems. PMID:27574710
Martini, Federico; Paglia, Maria Grazia; Montesano, Carla; Enders, Patrick J.; Gentile, Marco; Pauza, C. David; Gioia, Cristiana; Colizzi, Vittorio; Narciso, Pasquale; Pucillo, Leopoldo Paolo; Poccia, Fabrizio
2003-01-01
Vγ9Vδ2 T lymphocytes strongly respond to phosphoantigens from Plasmodium parasites. Thus, we analyzed the changes in Vγ9Vδ2 T-cell function and repertoire during the paroxysm phase of nonendemic malaria infection. During malaria paroxysm, Vγ9Vδ2 T cells were activated early but rapidly became anergic and finally lost Jγ1.2 Vγ9 complementarity-determining region 3 transcripts. PMID:12704176
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III (Inventor); Crossley, Edward A., Jr. (Inventor); Jones, Irby W. (Inventor); Miller, James B. (Inventor); Davis, C. Calvin (Inventor); Behun, Vaughn D. (Inventor); Goodrich, Lewis R., Sr. (Inventor)
1992-01-01
A linear mass actuator includes an upper housing and a lower housing connectable to each other and having a central passageway passing axially through a mass that is linearly movable in the central passageway. Rollers mounted in the upper and lower housings, in frictional engagement with the mass, translate the mass linearly in the central passageway; drive motors operatively coupled to the rollers rotate them, driving the mass axially in the central passageway.
Linear Corrugating - Final Technical Report
Lloyd Chapman
2000-05-23
Linear Corrugating is a process for the manufacture of corrugated containers in which the flutes of the corrugated medium are oriented in the Machine Direction (MD) of the several layers of paper used. Conversely, in the conventional corrugating process the flutes are oriented at right angles to the MD, in the Cross Machine Direction (CD). Paper is stronger in MD than in CD. Therefore, boxes made using the Linear Corrugating process are significantly stronger in the prime strength criterion, the Box Compression Test (BCT), than boxes made conventionally. This means that, using Linear Corrugating, boxes can be manufactured to a BCT equaling that of conventional boxes while containing 30% less fiber. The corrugated container industry is a large part of the U.S. economy, producing over 40 million tons annually. For such a large industry, the potential savings of Linear Corrugating are enormous. The grant for this project covered three phases in the development of the Linear Corrugating process: (1) production and evaluation of corrugated boxes on commercial equipment to verify that boxes so manufactured would have enhanced BCT as proposed in the application; (2) production and evaluation of corrugated boxes made on laboratory equipment using combined board from (1) above but having dual manufacturer's joints (glue joints); this box manufacturing method (Dual Joint) is proposed to overcome the box perimeter limitations of the Linear Corrugating process; (3) design, construction, operation, and evaluation of an engineering prototype machine to form flutes in corrugating medium in the MD of the paper; this operation is the central requirement of the Linear Corrugating process. Items (1) and (2) were successfully completed, showing the predicted BCT increases in the Linear Corrugated boxes and significant strength improvement in the Dual Joint boxes. The Former was constructed and operated successfully using kraft linerboard as the forming medium. It was found that tensile strength and stretch
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Chen, Qingwen; Narayanan, Kumaran
2015-01-01
Recombineering is a powerful genetic engineering technique based on homologous recombination that can be used to accurately modify DNA independent of its sequence or size. One novel application of recombineering is the assembly of linear BACs in E. coli that can replicate autonomously as linear plasmids. A circular BAC is inserted with a short telomeric sequence from phage N15, which is subsequently cut and rejoined by the phage protelomerase enzyme to generate a linear BAC with terminal hairpin telomeres. Telomere-capped linear BACs are protected against exonuclease attack both in vitro and in vivo in E. coli cells and can replicate stably. Here we describe step-by-step protocols to linearize any BAC clone by recombineering, including inserting and screening for presence of the N15 telomeric sequence, linearizing BACs in vivo in E. coli, extracting linear BACs, and verifying the presence of hairpin telomere structures. Linear BACs may be useful for functional expression of genomic loci in cells, maintenance of linear viral genomes in their natural conformation, and for constructing innovative artificial chromosome structures for applications in mammalian and plant cells.
Character displacement and the evolution of niche complementarity in a model biofilm community.
Ellis, Crystal N; Traverse, Charles C; Mayo-Smith, Leslie; Buskirk, Sean W; Cooper, Vaughn S
2015-02-01
Colonization of vacant environments may catalyze adaptive diversification and be followed by competition within the nascent community. How these interactions ultimately stabilize and affect productivity are central problems in evolutionary ecology. Diversity can emerge by character displacement, in which selection favors phenotypes that exploit an alternative resource and reduce competition, or by facilitation, in which organisms change the environment and enable different genotypes or species to become established. We previously developed a model of long-term experimental evolution in which bacteria attach to a plastic bead, form a biofilm, and disperse to a new bead. Here, we focus on the evolution of coexisting mutants within a population of Burkholderia cenocepacia and how their interactions affected productivity. Adaptive mutants initially competed for space, but later competition declined, consistent with character displacement and the predicted effects of the evolved mutations. The community reached a stable equilibrium as each ecotype evolved to inhabit distinct, complementary regions of the biofilm. Interactions among ecotypes ultimately became facilitative and enhanced mixed productivity. Observing the succession of genotypes within niches illuminated changing selective forces within the community, including a fundamental role for genotypes producing small colony variants that underpin chronic infections caused by B. cenocepacia.
Finite Element Interface to Linear Solvers
Williams, Alan
2005-03-18
Sparse systems of linear equations arise in many engineering applications, including finite elements, finite volumes, and others. The solution of linear systems is often the most computationally intensive portion of the application. Depending on the complexity of problems addressed by the application, there may be no single solver capable of solving all of the linear systems that arise. This motivates the desire to switch an application from one solver library to another, depending on the problem being solved. The interfaces provided by solver libraries differ greatly, making it difficult to switch an application code from one library to another. The amount of library-specific code in an application can be greatly reduced by having an abstraction layer between solver libraries and the application, putting a common "face" on various solver libraries. One such abstraction layer is the Finite Element Interface to Linear Solvers (FEI), which has seen significant use by finite element applications at Sandia National Laboratories and Lawrence Livermore National Laboratory.
NASA Astrophysics Data System (ADS)
Cacuci, Dan G.
2015-03-01
This work presents an illustrative application of the second-order adjoint sensitivity analysis methodology (2nd-ASAM) to a paradigm neutron diffusion problem, which is sufficiently simple to admit an exact solution, thereby making transparent the underlying mathematical derivations. The general theory underlying 2nd-ASAM indicates that, for a physical system comprising Nα parameters, the computation of all of the first- and second-order response sensitivities requires (per response) at most (2Nα + 1) "large-scale" computations using the first-level and, respectively, second-level adjoint sensitivity systems (1st-LASS and 2nd-LASS). Very importantly, however, the illustrative application presented in this work shows that the actual number of adjoint computations needed for computing all of the first- and second-order response sensitivities may be significantly less than (2Nα + 1) per response. For this illustrative problem, four "large-scale" adjoint computations sufficed for the complete and exact computations of all 4 first- and 10 distinct second-order derivatives. Furthermore, the construction and solution of the 2nd-LASS requires very little additional effort beyond the construction of the adjoint sensitivity system needed for computing the first-order sensitivities. Very significantly, only the sources on the right-sides of the diffusion (differential) operator needed to be modified; the left-side of the differential equations (and hence the "solver" in large-scale practical applications) remained unchanged. All of the first-order relative response sensitivities to the model parameters have significantly large values, of order unity. Also importantly, most of the second-order relative sensitivities are just as large, and some even up to twice as large as the first-order sensitivities. In the illustrative example presented in this work, the second-order sensitivities contribute little to the response variances and covariances. However, they have the
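The sensitivity counts quoted above follow from elementary combinatorics; with $N_\alpha = 4$ model parameters:

```latex
\underbrace{N_\alpha}_{\text{first-order sensitivities}} = 4, \qquad
\underbrace{\tfrac{1}{2}N_\alpha\left(N_\alpha + 1\right)}_{\text{distinct second-order sensitivities}} = 10, \qquad
\underbrace{2N_\alpha + 1}_{\text{worst-case adjoint solves per response}} = 9 .
```

The four large-scale adjoint computations reported for this illustrative problem thus fall well below the worst-case bound of nine.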
Solving a signalized traffic intersection problem with a hyperbolic penalty function
NASA Astrophysics Data System (ADS)
Melo, Teófilo; Monteiro, M. Teresa T.; Matias, João
2012-09-01
Mathematical Programs with Complementarity Constraints (MPCC) find many applications in fields such as engineering design, economic equilibrium and mathematical programming theory itself. A queueing system model resulting from a single signalized intersection regulated by pre-timed control in a traffic network is considered. The model is formulated as an MPCC problem. A MATLAB implementation based on a hyperbolic penalty function is used to solve this practical problem, computing the total average waiting time of the vehicles in all queues and the green split allocation. The problem was coded in AMPL.
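The hyperbolic-penalty idea can be sketched on a toy problem (this is only an illustration, not the authors' AMPL/MATLAB implementation; the objective, the penalty parameters `lam` and `tau`, and the starting point are all assumptions). Inequality constraints g(x) >= 0 and the complementarity condition x*y = 0 (written as -x*y >= 0 together with x, y >= 0) are replaced by a smooth penalty that is near zero when the constraint holds and grows linearly when it is violated:

```python
import numpy as np
from scipy.optimize import minimize

def hyp_penalty(g, lam=10.0, tau=1e-3):
    # Smooth hyperbolic penalty for g >= 0: ~0 when g >> 0,
    # ~2*lam*|g| when g << 0; tau controls the smoothing.
    return -lam * g + np.sqrt(lam**2 * g**2 + tau**2)

def objective(z):
    x, y = z
    f = (x - 1.0)**2 + (y - 1.0)**2          # smooth objective (illustrative)
    pen = (hyp_penalty(x) + hyp_penalty(y)   # x >= 0, y >= 0
           + hyp_penalty(-x * y))            # x*y <= 0 => complementarity
    return f + pen

res = minimize(objective, x0=[0.8, 0.2], method="BFGS")
x, y = res.x
print(x, y, x * y)  # one variable ends up near 0, so x*y is (approximately) 0
```

The unconstrained minimizer of the penalized problem lands near (1, 0), one of the two complementary solutions of the toy MPCC.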
Richter, B.
1985-12-01
A report is given on the goals and progress of the SLAC Linear Collider. The status of the machine and the detectors are discussed and an overview is given of the physics which can be done at this new facility. Some ideas on how (and why) large linear colliders of the future should be built are given.
Linear Equations: Equivalence = Success
ERIC Educational Resources Information Center
Baratta, Wendy
2011-01-01
The ability to solve linear equations sets students up for success in many areas of mathematics and other disciplines requiring formula manipulations. There are many reasons why solving linear equations is a challenging skill for students to master. One major barrier for students is the inability to interpret the equals sign as anything other than…
Alfonso, R; Belinchon, I
2001-01-01
Linear eruptions are sometimes associated with systemic diseases and they may also be induced by various drugs. Paradoxically, such acquired inflammatory skin diseases tend to follow the system of Blaschko's lines. We describe a case of unilateral linear drug eruption caused by ibuprofen, which later became bilateral and generalized.
Linearization of Robot Manipulators
NASA Technical Reports Server (NTRS)
Kreutz, Kenneth
1987-01-01
Four nonlinear control schemes are shown to be equivalent. Report discusses theory of nonlinear feedback control of a robot manipulator, with emphasis on control schemes making manipulator input and output behave like a decoupled linear system. The approach, called "exact external linearization," contributes to efforts to control end-effector trajectories, positions, and orientations.
THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)
This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...
Linear models: permutation methods
Cade, B.S.; Everitt, B.S.; Howell, D.C.
2005-01-01
Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
NASA Technical Reports Server (NTRS)
Clancy, John P.
1988-01-01
The object of the invention is to provide a mechanical force actuator which is lightweight and manipulatable and utilizes linear motion for push or pull forces while maintaining a constant overall length. The mechanical force producing mechanism comprises a linear actuator mechanism and a linear motion shaft mounted parallel to one another. The linear motion shaft is connected to a stationary or fixed housing and to a movable housing, where the movable housing is mechanically actuated through the actuator mechanism by either manual means or motor means. The housings are adapted to releasably receive a variety of jaw or pulling elements adapted for clamping or prying action. The stationary housing is adapted to be pivotally mounted to permit an angular position of the housing to allow the tool to adapt to skewed interfaces. The actuator mechanism is operated by a gear train to obtain linear motion.
Is Africa a 'Graveyard' for Linear Accelerators?
Reichenvater, H; Matias, L Dos S
2016-12-01
Linear accelerator downtimes are common and problematic in many African countries and may jeopardise the outcome of affected radiation treatments. The predicted increase in cancer incidence and prevalence on the African continent will require, inter alia, improved response with regard to a reduction in linear accelerator downtimes. Here we discuss the problems associated with the maintenance and repair of linear accelerators and propose alternative solutions relevant for local conditions in African countries. The paper is based on about four decades of experience in capacity building, installing, commissioning, calibrating, servicing and repairing linear accelerators in Africa, where about 40% of the low and middle income countries in the world are geographically located. Linear accelerators can successfully be operated, maintained and repaired in African countries provided proper maintenance and repair plans are put in place and executed.
A Linear Bicharacteristic FDTD Method
NASA Technical Reports Server (NTRS)
Beggs, John H.
2001-01-01
The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics [1]-[7]. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility through the ability to adopt a characteristic-based method. The use of characteristic variables allows the LBS to treat the outer computational boundaries naturally using the exact compatibility equations. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, plus it generalizes much more easily to nonuniform grids. It has previously been applied to two- and three-dimensional free-space electromagnetic propagation and scattering problems [3], [6], [7]. This paper extends the LBS to model lossy dielectric and magnetic materials. Results are presented for several one-dimensional model problems, and the FDTD algorithm is chosen as a convenient reference for comparison.
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.
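The role of LP duality in certifying optimality can be sketched on a toy primal/dual pair (the numbers are illustrative assumptions, not one of the paper's gadget LPs): any dual-feasible point bounds the primal objective, and matching values prove optimality on both sides:

```python
import numpy as np
from scipy.optimize import linprog

# Primal: minimize c^T x  s.t.  x1 + x2 >= 4,  2*x1 + x2 >= 5,  x >= 0.
c = np.array([2.0, 3.0])
A_ub = np.array([[-1.0, -1.0],    # -(x1 + x2)    <= -4
                 [-2.0, -1.0]])   # -(2*x1 + x2)  <= -5
b_ub = np.array([-4.0, -5.0])
primal = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)

# Dual of  min c^T x s.t. A x >= b, x >= 0:
#   max b^T y  s.t.  A^T y <= c, y >= 0   (solved as  min -b^T y)
A, b = -A_ub, -b_ub
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

print(primal.fun, -dual.fun)  # strong duality: the two optimal values coincide
```

Here the dual optimum certifies that no primal-feasible point can do better, which is exactly the style of optimality proof the gadget search exploits.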
Stochastic Optimal Control and Linear Programming Approach
Buckdahn, R.; Goreac, D.; Quincampoix, M.
2011-04-15
We study a classical stochastic optimal control problem with constraints and discounted payoff in an infinite horizon setting. The main result of the present paper lies in the fact that this optimal control problem is shown to have the same value as a linear optimization problem stated on some appropriate space of probability measures. This enables one to derive a dual formulation that appears to be strongly connected to the notion of (viscosity sub) solution to a suitable Hamilton-Jacobi-Bellman equation. We also discuss the relation to long-time average problems.
A Class of FFT Based Algorithms for Linear Estimation.
1982-04-01
Classical problems such as linear smoothing and recursive block filtering can be solved exactly by some new nonrecursive algorithms, in settings where data is received and smoothed recursively, block by block. Cited references include "Decades of Linear Filtering Theory," IEEE Trans. Inform. Theory IT-20, Mar. 1974, and J. S. Meditch, "A Survey of Data Smoothing for Linear and…".
Linear ubiquitination in immunity.
Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning
2015-07-01
Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translation into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms as to how linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types.
Postma, Johannes A.; Lynch, Jonathan P.
2012-01-01
Background and Aims During their domestication, maize, bean and squash evolved in polycultures grown by small-scale farmers in the Americas. Polycultures often overyield on low-fertility soils, which are a primary production constraint in low-input agriculture. We hypothesized that root architectural differences among these crops cause niche complementarity and thereby greater nutrient acquisition than corresponding monocultures. Methods A functional–structural plant model, SimRoot, was used to simulate the first 40 d of growth of these crops in monoculture and polyculture and to determine the effects of root competition on nutrient uptake and biomass production of each plant on low-nitrogen, -phosphorus and -potassium soils. Key Results Squash, the earliest domesticated crop, was most sensitive to low soil fertility, while bean, the most recently domesticated crop, was least sensitive to low soil fertility. Nitrate uptake and biomass production were up to 7 % greater in the polycultures than in the monocultures, but only when root architecture was taken into account. Enhanced nitrogen capture in polycultures was independent of nitrogen fixation by bean. Root competition had negligible effects on phosphorus or potassium uptake or biomass production. Conclusions We conclude that spatial niche differentiation caused by differences in root architecture allows polycultures to overyield when plants are competing for mobile soil resources. However, direct competition for immobile resources might be negligible in agricultural systems. Interspecies root spacing may also be too large to allow maize to benefit from root exudates of bean or squash. Above-ground competition for light, however, may have strong feedbacks on root foraging for immobile nutrients, which may increase cereal growth more than it will decrease the growth of the other crops. We note that the order of domestication of crops correlates with increasing nutrient efficiency, rather than production
NASA Astrophysics Data System (ADS)
Thum, T.; Peylin, P.; Granier, A.; Ibrom, A.; Linden, L.; Loustau, D.; Bacour, C.; Ciais, P.
2010-12-01
Assimilation of data from several measurements provides knowledge of the model's performance and uncertainties. In this work we investigate the complementarity of biomass data to net CO2 flux (NEE) and latent heat flux (LE) in optimising parameters of the biogeochemical model ORCHIDEE. Our optimisation method is a gradient-based iterative method. We optimized the model at the French forest sites, the European beech forest of Hesse (48.67°N, 7.06°E) and the maritime pine forest of Le Bray (44.72°N, 0.77°W). First we adapted the model to represent the past clearcut on these two sites in order to obtain a realistic age of the forest. The model-data improvement in terms of aboveground biomass will be discussed. We then used FluxNet and biomass data, separately and altogether, in the optimization process to assess the potential and the complementarity of these two data streams. For biomass data optimization we added parameters linked to allocation to the optimization scheme. The results show a decrease in the uncertainty of the parameters after optimization and reveal some structural deficiencies in the model. In a second step, data from the ecosystem manipulation experiment site Brandbjerg (55.88°N, 11.97°E), a Danish grassland site, were used for model optimisation. The different ecosystem experiments at this site include rain exclusion, warming, and increased CO2 concentration, and only biomass data were available and used in the optimization for the different treatments. We investigate the ability of the model to represent the biomass differences between manipulative experiments with a given set of parameters and highlight model deficiencies.
1979-12-01
Optimal Linear Control. C.A. Harvey, M.G. Safonov, G. Stein, J.C. Doyle; Honeywell Systems & Research Center, 2600 Ridgway Parkway, Minneapolis. Characterizations of optimal linear controls have been derived, from which guides for selecting the structure of the control system and the weights in…
NASA Technical Reports Server (NTRS)
Studer, P. A. (Inventor)
1983-01-01
A linear magnetic bearing system having electromagnetic vernier flux paths in shunt relation with permanent magnets, so that the vernier flux does not traverse the permanent magnet, is described. Novelty is believed to reside in providing a linear magnetic bearing having electromagnetic flux paths that bypass high reluctance permanent magnets. Particular novelty is believed to reside in providing a linear magnetic bearing with a pair of axially spaced elements having electromagnets for establishing vernier x and y axis control. The magnetic bearing system has possible use in connection with a long life reciprocating cryogenic refrigerator that may be used on the space shuttle.
Word Problems: A "Meme" for Our Times.
ERIC Educational Resources Information Center
Leamnson, Robert N.
1996-01-01
Discusses a novel approach to word problems that involves linear relationships between variables. Argues that working stepwise through intermediates is the way our minds actually work and therefore this should be used in solving word problems. (JRH)
NASA Technical Reports Server (NTRS)
Laughlin, Darren
1995-01-01
Inertial linear actuators developed to suppress residual accelerations of nominally stationary or steadily moving platforms. Function like long-stroke version of voice coil in conventional loudspeaker, with superimposed linear variable-differential transformer. Basic concept also applicable to suppression of vibrations of terrestrial platforms. For example, laboratory table equipped with such actuators plus suitable vibration sensors and control circuits made to vibrate much less in presence of seismic, vehicular, and other environmental vibrational disturbances.
NASA Technical Reports Server (NTRS)
Callier, Frank M.; Desoer, Charles A.
1991-01-01
The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.
1985-04-01
The limited test program has demonstrated the application of linear motor drive technology to a Stirling cycle cryocooler design. The report covers the detailed design of the linear resonance cryocooler; figures include the compressor motor force versus rotor axial position and the compressor P-V diagram.
NASA Astrophysics Data System (ADS)
Morel, Danielle; Levy, William B.
2006-03-01
Information processing in the brain is metabolically expensive and energy usage by the different components of the nervous system is not well understood. In a continuing effort to explore the costs and constraints of information processing at the single neuron level, dendritic processes are being studied. More specifically, the role of various ion channel conductances is explored in terms of integrating dendritic excitatory synaptic input. Biophysical simulations of dendritic behavior show that the complexity of voltage-dependent, non-linear dendritic conductances can produce simplicity in the form of linear synaptic integration. Over increasing levels of synaptic activity, it is shown that two types of voltage-dependent conductances produce linearization over a limited range. This range is determined by the parameters defining the ion channel and the 'passive' properties of the dendrite. A persistent sodium and a transient A-type potassium channel were considered at steady-state transmembrane potentials in the vicinity of and hyperpolarized to the threshold for action potential initiation. The persistent sodium is seen to amplify and linearize the synaptic input over a short range of low synaptic activity. In contrast, the A-type potassium channel has a broader linearization range but tends to operate at higher levels of synaptic bombardment. Given equivalent 'passive' dendritic properties, the persistent sodium is found to be less costly than the A-type potassium in linearizing synaptic input.
2016-01-01
Epitope-based design of vaccines, immunotherapeutics, and immunodiagnostics is complicated by structural changes that radically alter immunological outcomes. This is obscured by expressing redundancy among linear-epitope data as fractional sequence-alignment identity, which fails to account for potentially drastic loss of binding affinity due to single-residue substitutions even where these might be considered conservative in the context of classical sequence analysis. From the perspective of immune function based on molecular recognition of epitopes, functional redundancy of epitope data (FRED) thus may be defined in a biologically more meaningful way based on residue-level physicochemical similarity in the context of antigenic cross-reaction, with functional similarity between epitopes expressed as the Shannon information entropy for differential epitope binding. Such similarity may be estimated in terms of structural differences between an immunogen epitope and an antigen epitope with reference to an idealized binding site of high complementarity to the immunogen epitope, by analogy between protein folding and ligand-receptor binding; but this underestimates potential for cross-reactivity, suggesting that epitope-binding site complementarity is typically suboptimal as regards immunologic specificity. The apparently suboptimal complementarity may reflect a tradeoff to attain optimal immune function that favors generation of immune-system components each having potential for cross-reactivity with a variety of epitopes. PMID:27274725
A linear combination of modified Bessel functions
NASA Technical Reports Server (NTRS)
Shitzer, A.; Chato, J. C.
1971-01-01
A linear combination of modified Bessel functions is defined, discussed briefly, and tabulated. This combination was found to recur in the analysis of various heat transfer problems and in the analysis of the thermal behavior of living tissue when modeled by cylindrical shells.
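A minimal sketch of how such a combination arises (the boundary-value problem and the numbers are illustrative assumptions, not taken from the paper): the radial equation θ'' + θ'/r − θ = 0, which appears in fin conduction and in cylindrical-shell models of living tissue, has general solution a·I0(r) + b·K0(r), with a and b fixed by the boundary conditions:

```python
import numpy as np
from scipy.special import iv, kv  # modified Bessel functions I_v, K_v

# Annular region [r1, r2] with boundary temperatures theta(r1)=1, theta(r2)=0
r1, r2 = 1.0, 3.0

# Solve the 2x2 system for the coefficients of theta(r) = a*I0(r) + b*K0(r)
M = np.array([[iv(0, r1), kv(0, r1)],
              [iv(0, r2), kv(0, r2)]])
a, b = np.linalg.solve(M, [1.0, 0.0])

def theta(r):
    # The linear combination of modified Bessel functions
    return a * iv(0, r) + b * kv(0, r)

print(theta(r1), theta(r2))  # boundary values 1 and 0 are recovered
```

Tabulating `theta` over a grid of r mirrors the kind of tabulation the note provides for its particular combination.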
Recursive inversion of externally defined linear systems
NASA Technical Reports Server (NTRS)
Bach, Ralph E., Jr.; Baram, Yoram
1988-01-01
The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problems of system identification and compensation.
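The setting can be sketched under stated assumptions (the impulse response `h` and inverse length `m` are illustrative; this is ordinary batch least squares over the triangular Toeplitz convolution matrix, not the paper's exactly initialized recursive procedure): find a finite impulse response g so that convolving it with the system's impulse response approximates a unit impulse:

```python
import numpy as np
from scipy.linalg import toeplitz

# Impulse response of the (internally unknown) system — illustrative values
h = np.array([1.0, 0.5, 0.25])

# Lower-triangular Toeplitz convolution matrix: (H @ g) = conv(h, g)
m = 16                                  # assumed length of the FIR inverse
n = len(h) + m - 1
H = toeplitz(np.r_[h, np.zeros(m - 1)],  # first column: h padded with zeros
             np.r_[h[0], np.zeros(m - 1)])  # first row: upper part is zero

# Least-squares FIR inverse: conv(h, g) ≈ delta (unit impulse)
delta = np.zeros(n)
delta[0] = 1.0
g, *_ = np.linalg.lstsq(H, delta, rcond=None)

residual = np.convolve(h, g) - delta
print(np.max(np.abs(residual)))  # small residual => g approximately inverts h
```

Because `h` here is minimum-phase, a short FIR inverse suffices; the Toeplitz structure of `H` is what the paper exploits to initialize the recursive least-squares solution exactly.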