Science.gov

Sample records for linear complementarity problem

  1. Polynomial interior-point algorithms for horizontal linear complementarity problem

    NASA Astrophysics Data System (ADS)

    Wang, G. Q.; Bai, Y. Q.

    2009-11-01

    In this paper a class of polynomial interior-point algorithms for the horizontal linear complementarity problem, based on a new parametric kernel function with parameters p ∈ [0,1] and σ ≥ 1, is presented. The proposed parametric kernel function is neither exponentially convex nor strongly convex, unlike the usual kernel functions, and has a finite value at the boundary of the feasible region. It is used both for determining the search directions and for measuring the distance between the given iterate and the μ-center for the algorithm. The currently best known iteration bounds for the algorithm with large- and small-update methods are derived; these reduce the gap between the practical behavior of the algorithms and their theoretical performance results. Numerical tests demonstrate the behavior of the algorithms for different values of the parameters p, σ and θ.

  2. The treatment of contact problems as a non-linear complementarity problem

    SciTech Connect

    Bjorkman, G.

    1994-12-31

    Contact and friction problems are of great importance in many engineering applications, for example in ball bearings, bolted joints, metal forming and also car crashes. In these problems the behavior on the contact surface has a great influence on the overall behavior of the structure. Often problems such as wear and initiation of cracks occur on the contact surface. Contact problems are often described using complementarity conditions, w ≥ 0, p ≥ 0, w^T p = 0, which for example represent the following behavior: (i) two bodies cannot penetrate each other, i.e. the gap must be greater than or equal to zero, (ii) the contact pressure is positive and different from zero only if the two bodies are in contact with each other. Here it is shown that by using the theory of non-linear complementarity problems the unilateral behavior of the problem can be treated in a straightforward way. It is shown how solution methods for discretized frictionless contact problems can be formulated. By formulating the problem either as a generalized equation or as a B-differentiable function, it is pointed out how Newton's method may be extended to contact problems. Also an algorithm for tracing the equilibrium path of frictionless contact problems is described. It is shown that, in addition to the "classical" bifurcation and limit points, there can be points where the equilibrium path has reached an end point or points where bifurcation is possible even if the stiffness matrix is non-singular.
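
    Once a discretized contact stiffness is assembled, the complementarity conditions above take the form of a linear complementarity problem w = Mp + q. As a rough illustration of that structure (a minimal sketch with hypothetical data, not the generalized-equation or B-differentiable Newton schemes of this record), a projected Gauss-Seidel sweep enforces w ≥ 0, p ≥ 0, w^T p = 0:

```python
import numpy as np

def projected_gauss_seidel(M, q, iters=200):
    """Solve the LCP  w = M p + q,  w >= 0, p >= 0, w^T p = 0
    with projected Gauss-Seidel sweeps (a common sketch for
    frictionless contact; not the scheme of the cited paper)."""
    n = len(q)
    p = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            r = q[i] + M[i] @ p - M[i, i] * p[i]   # residual without the i-th term
            p[i] = max(0.0, -r / M[i, i])          # project onto p_i >= 0
    w = M @ p + q
    return p, w

# Tiny two-contact example with a symmetric positive definite M (hypothetical data).
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 0.3])
p, w = projected_gauss_seidel(M, q)
print(p, w, p @ w)   # complementarity: p_i * w_i is (numerically) zero
```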

  3. New Existence Conditions for Order Complementarity Problems

    NASA Astrophysics Data System (ADS)

    Németh, S. Z.

    2009-09-01

    Complementarity problems are mathematical models of problems in economics, engineering and physics. A special class of complementarity problems is the class of order complementarity problems [2]. Order complementarity problems can be applied in lubrication theory [6] and economics [1]. The notion of an exceptional family of elements for general order complementarity problems in Banach spaces will be introduced. It will be shown that a general order complementarity problem defined by completely continuous fields has either a solution or an exceptional family of elements (for other notions of exceptional family of elements see [1, 2, 3, 4] and the related references therein). This solves a conjecture of [2] about the existence of exceptional families of elements for order complementarity problems. The proof can be done by using the Leray-Schauder alternative [5]. An application to integral operators will be given.

  4. A path-following interior-point algorithm for linear and quadratic problems

    SciTech Connect

    Wright, S.J.

    1993-12-01

    We describe an algorithm for the monotone linear complementarity problem that converges for many positive, not necessarily feasible, starting points and exhibits polynomial complexity if some additional assumptions are made on the starting point. If the problem has a strictly complementary solution, the method converges subquadratically. We show that the algorithm and its convergence extend readily to the mixed monotone linear complementarity problem and, hence, to all the usual formulations of the linear programming and convex quadratic programming problems.

  5. The fully actuated traffic control problem solved by global optimization and complementarity

    NASA Astrophysics Data System (ADS)

    Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria

    2016-02-01

    Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times on each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimization of the total waiting time for the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and to determine efficiently the effective green and red times for a signalized intersection.

  6. Non-linear potential problems

    NASA Astrophysics Data System (ADS)

    Skerget, P.; Brebbia, C. A.

    In many practical applications of boundary elements, the potential problems may be nonlinear. The use of Kirchhoff's transform provides an approach to convert a nonlinear material problem into a linear one. A description of several different shape functions to define the conductivity is presented. Attention is given to the type of integral equations which are obtained if Kirchhoff's transform is applied for nonlinear material in the presence of mixed boundary conditions. The integral formulation for nonlinear radiation boundary conditions with and without potential-dependent conductivity is also considered. For steady heat conduction problems with constant conductivity a boundary integral equation relating boundary values for temperatures (or potentials) and their normal derivatives over the boundary can be obtained. Applications which concern the solution of steady state conduction problems are investigated. The problems are related to a hollow cylinder, a nuclear reactor pressure vessel, and an industrial furnace.
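
    The Kirchhoff transform mentioned here maps the potential u to ψ(u) = ∫ k(s) ds, so that the nonlinear conduction equation ∇·(k(u)∇u) = 0 becomes the linear Laplace equation ∇²ψ = 0. A minimal sketch, assuming a hypothetical linear conductivity law k(u) = k0(1 + βu) rather than any of the shape functions discussed in the record:

```python
import numpy as np

# Assumed conductivity law k(u) = k0 * (1 + beta*u); values are hypothetical.
k0, beta = 1.0, 0.05

def kirchhoff(u):
    """psi(u) = integral_0^u k(s) ds, in closed form for the assumed law."""
    return k0 * (u + 0.5 * beta * u**2)

def kirchhoff_inverse(psi):
    """Invert psi back to the potential u (positive root of the quadratic)."""
    return (-1.0 + np.sqrt(1.0 + 2.0 * beta * psi / k0)) / beta

# Because div(k(u) grad u) = laplacian(psi), psi obeys the *linear* Laplace
# equation; e.g. in 1D steady conduction between u(0)=100 and u(L)=20,
# psi varies linearly in x.
L, x = 1.0, np.linspace(0.0, 1.0, 5)
psi = kirchhoff(100.0) + (kirchhoff(20.0) - kirchhoff(100.0)) * x / L
print(kirchhoff_inverse(psi))   # recovers the nonlinear temperature profile
```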

  7. Linearization problem in pseudolite surveys

    NASA Astrophysics Data System (ADS)

    Cellmer, Slawomir; Rapinski, Jacek

    2010-06-01

    GPS augmented with pseudolites (PL) can be used in various engineering surveys. A pseudolite-only navigation system can also be designed and used in any place, even if the GPS signal is not available (Kee et al. Development of indoor navigation system using asynchronous pseudolites, 1038-1045, 2000). Especially in engineering surveys, where a harsh survey environment is common, pseudolites have many applications: they may be used on construction sites, in open pit mines, in city canyons, etc. GPS and PL baseline processing is similar, although there are a few differences that must be taken into account. One of the major issues is the linearization problem. The source of the problem is the neglect of the second-order terms of the Taylor series expansion in GPS baseline processing software. This problem occurs when the pseudolite is relatively close to the receiver, which is the case in PL surveys. In this paper the authors present an algorithm for GPS + PL data processing that includes the second-order terms of the Taylor series expansion, which are neglected in the classical GPS-only approach. The mathematical model of the adjustment problem, a detailed proposal for application in baseline processing algorithms, and numerical tests are presented.
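
    The effect of the neglected second-order Taylor term can be checked numerically. In the sketch below (hypothetical geometry, not the authors' adjustment model), the range to a nearby pseudolite is expanded about an approximate receiver position; the Hessian of the range function, (I − uuᵀ)/ρ0, supplies the second-order correction that the classical GPS-only linearization drops:

```python
import numpy as np

# Hypothetical geometry: a pseudolite (PL) only ~60 m from the receiver,
# and a linearization point about 5 m away from the true position.
pl = np.array([50.0, 30.0, 10.0])       # pseudolite position
x0 = np.array([0.0, 0.0, 0.0])          # approximate receiver position
dx = np.array([3.0, -3.0, 2.0])         # correction to be estimated

def rng(x):                              # geometric range to the pseudolite
    return np.linalg.norm(pl - x)

rho0 = rng(x0)
u = (x0 - pl) / rho0                     # unit line-of-sight vector
H = (np.eye(3) - np.outer(u, u)) / rho0  # Hessian of the range function

first = rho0 + u @ dx                    # classical (GPS-style) linearization
second = first + 0.5 * dx @ H @ dx       # with the second-order Taylor term
exact = rng(x0 + dx)
print(exact - first, exact - second)     # the second-order error is far smaller
```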

  8. The linear separability problem: some testing methods.

    PubMed

    Elizondo, D

    2006-03-01

    The notion of linear separability is used widely in machine learning research. Learning algorithms that use this concept include neural networks (the single-layer perceptron and the recursive deterministic perceptron) and kernel machines (support vector machines). This paper presents an overview of several of the methods for testing linear separability between two classes. The methods are divided into four groups: those based on linear programming, those based on computational geometry, one based on neural networks, and one based on quadratic programming. The Fisher linear discriminant method is also presented. A section on the quantification of the complexity of classification problems is included. PMID:16566462
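
    The linear-programming-based tests amount to a feasibility problem: the two classes are linearly separable exactly when some hyperplane w·x + b separates them with a unit margin. A minimal sketch of such a test using scipy (not necessarily the exact formulation surveyed in the paper):

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(A, B):
    """LP feasibility test (one of the 'linear programming' group of methods,
    sketched here, not Elizondo's exact formulation): find w, b with
    w.x + b >= 1 on class A and w.x + b <= -1 on class B."""
    d = A.shape[1]
    # variables z = (w_1..w_d, b); constraints written as A_ub z <= b_ub
    A_ub = np.vstack([np.hstack([-A, -np.ones((len(A), 1))]),
                      np.hstack([ B,  np.ones((len(B), 1))])])
    b_ub = -np.ones(len(A) + len(B))
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success

A = np.array([[0.0, 0.0], [1.0, 0.2]])       # toy class A
B = np.array([[2.0, 2.0], [1.5, 2.5]])       # toy class B
print(linearly_separable(A, B))              # True for this toy data
```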

  9. The Vertical Linear Fractional Initialization Problem

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Hartley, Tom T.

    1999-01-01

    This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.

  10. Random generation of structured linear optimization problems

    SciTech Connect

    Arthur, J.; Frendewey, J. Jr.

    1994-12-31

    We describe the on-going development of a random generator for linear optimization problems (LPs) founded on the concept of block structure. The general LP: minimize z = cx subject to Ax = b, x ≥ 0 can take a variety of special forms determined (primarily) by predefined structures on the matrix A of constraint coefficients. The authors have developed several random problem generators which provide instances of LPs having such structure; in particular (i) general (non-structured) problems, (ii) generalized upper bound (GUB) constraints, (iii) minimum cost network flow problems, (iv) transportation and assignment problems, (v) shortest path problems, (vi) generalized network flow problems, and (vii) multicommodity network flow problems. This paper discusses the general philosophy behind the construction of these generators. In addition, the task of combining the generators into a single generator -- in which the matrix A can contain various blocks, each of a prescribed structure from those mentioned above -- is described.
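
    As a small illustration of class (iv), the sketch below generates a random balanced transportation instance; equal supply and demand totals guarantee feasibility of the LP. The generator shown is only a toy stand-in for the authors' generators:

```python
import numpy as np

def random_transportation(m, n, total=1000, cost_range=(1, 20), seed=0):
    """Generate a random balanced transportation instance: supplies and
    demands are drawn at random but share the same total, so the LP
    min sum c_ij x_ij  s.t. row sums = supply, col sums = demand, x >= 0
    is feasible by construction."""
    rng = np.random.default_rng(seed)
    supply = rng.multinomial(total, np.ones(m) / m)   # m supplies summing to `total`
    demand = rng.multinomial(total, np.ones(n) / n)   # n demands summing to `total`
    cost = rng.integers(cost_range[0], cost_range[1] + 1, size=(m, n))
    return cost, supply, demand

cost, supply, demand = random_transportation(3, 4)
print(cost, supply, demand, sep="\n")
```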

  11. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1990-01-01

    This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.

  12. Numerical linear algebra for reconstruction inverse problems

    NASA Astrophysics Data System (ADS)

    Nachaoui, Abdeljalil

    2004-01-01

    Our goal in this paper is to discuss various issues we have encountered in trying to find and implement efficient solvers for a boundary integral equation (BIE) formulation of an iterative method for solving a reconstruction problem. We survey some methods from numerical linear algebra, which are relevant for the solution of this class of inverse problems. We motivate the use of our constructing algorithm, discuss its implementation and mention the use of preconditioned Krylov methods.

  13. Linear stochastic optimal control and estimation problem

    NASA Technical Reports Server (NTRS)

    Geyser, L. C.; Lehtinen, F. K. B.

    1980-01-01

    Problem involves design of controls for linear time-invariant system disturbed by white noise. Solution is Kalman filter coupled through set of optimal regulator gains to produce desired control signal. Key to solution is solving matrix Riccati differential equation. LSOCE effectively solves problem for wide range of practical applications. Program is written in FORTRAN IV for batch execution and has been implemented on IBM 360.
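
    The steady-state part of such a design reduces to an algebraic Riccati equation, for which standard library routines exist today. A minimal modern sketch with hypothetical plant data (the LSOCE program itself integrates the matrix Riccati differential equation in FORTRAN IV):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state plant dx = A x dt + B u dt + noise; cost integral of x'Qx + u'Ru.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[0.1]])

# Steady-state solution of the regulator Riccati equation (only the limiting
# case of the differential equation solved by LSOCE).
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # optimal regulator gains, u = -K x
print(K)
```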

  14. The linear regulator problem for parabolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Kunisch, K.

    1983-01-01

    An approximation framework is presented for computation (in finite dimensional spaces) of Riccati operators that can be guaranteed to converge to the Riccati operator in feedback controls for abstract evolution systems in a Hilbert space. It is shown how these results may be used in the linear optimal regulator problem for a large class of parabolic systems.

  15. Higher order sensitivity of solutions to convex programming problems without strict complementarity

    NASA Technical Reports Server (NTRS)

    Malanowski, Kazimierz

    1988-01-01

    Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.

  16. Numerical stability in problems of linear algebra.

    NASA Technical Reports Server (NTRS)

    Babuska, I.

    1972-01-01

    Mathematical problems are introduced as mappings from the space of input data to that of the desired output information. Then a numerical process is defined as a prescribed recurrence of elementary operations creating the mapping of the underlying mathematical problem. The ratio of the error committed by executing the operations of the numerical process (the roundoff errors) to the error introduced by perturbations of the input data (initial error) gives rise to the concept of lambda-stability. As examples, several processes are analyzed from this point of view, including, especially, old and new processes for solving systems of linear algebraic equations with tridiagonal matrices. In particular, it is shown how such a priori information can be utilized as, for instance, a knowledge of the row sums of the matrix. Information of this type is frequently available where the system arises in connection with the numerical solution of differential equations.
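
    A concrete instance of the tridiagonal processes analyzed here is the classical forward-elimination/back-substitution (Thomas) algorithm. The sketch below is a generic textbook version, shown only to make the setting explicit; it is not one of the specific old or new processes examined in the paper:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d by forward elimination and
    back substitution (a[0] and c[n-1] are unused)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -u'' = 1 on a 5-point interior grid, u(0) = u(1) = 0 (hypothetical test system)
n, h = 5, 1.0 / 6
a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
print(thomas(a, b, c, np.full(n, h * h)))
```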

  17. Computational Complementarity

    NASA Astrophysics Data System (ADS)

    Finkelstein, David; Finkelstein, Shlomit Ritz

    1983-08-01

    Interactivity generates paradox in that the interactive control by one system C of predicates about another system-under-study S may falsify these predicates. We formulate an “interactive logic” to resolve this paradox of interactivity. Our construction generalizes one, the Galois connection, used by Von Neumann for the similar quantum paradox. We apply the construction to a transition system, a concept that includes general systems, automata, and quantum systems. In some (classical) automata S, the interactive predicates about S show quantumlike complementarity arising from interactivity: The interactive paradox generates the quantum paradox. Some classical S's have noncommutative algebras of interactively observable coordinates similar to the Heisenberg algebra of a quantum system. Such S's are “hidden variable” models of quantum theory not covered by the hidden variable studies of Von Neumann, Bohm, Bell, or Kochen and Specker. It is conceivable that some quantum effects in Nature arise from interactivity.

  18. Menu-Driven Solver Of Linear-Programming Problems

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).

  19. A multistage linear array assignment problem

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Shier, D. R.; Kincaid, R. K.; Richards, D. S.

    1988-01-01

    The implementation of certain algorithms on parallel processing computing architectures can involve partitioning contiguous elements into a fixed number of groups, each of which is to be handled by a single processor. It is desired to find an assignment of elements to processors that minimizes the sum of the maximum workloads experienced at each stage. This problem can be viewed as a multi-objective network optimization problem. Polynomially-bounded algorithms are developed for the case of two stages, whereas the associated decision problem (for an arbitrary number of stages) is shown to be NP-complete. Heuristic procedures are therefore proposed and analyzed for the general problem. Computational experience with one of the exact algorithms, incorporating certain pruning rules, is presented. Empirical results also demonstrate that one of the heuristic procedures is especially effective in practice.
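
    For a single stage, the contiguous min-max partition subproblem can be solved, for example, by a binary search on the bottleneck workload with a greedy feasibility check, as sketched below (an illustrative special case, not the two-stage polynomial algorithm or the heuristics of the paper):

```python
def min_max_partition(work, k):
    """Smallest possible maximum group workload when `work` (contiguous,
    integer element workloads) is split into k contiguous groups; binary
    search on the bottleneck with a greedy feasibility check."""
    def feasible(cap):
        groups, cur = 1, 0
        for w in work:
            if w > cap:
                return False
            if cur + w > cap:          # start a new group
                groups, cur = groups + 1, w
            else:
                cur += w
        return groups <= k

    lo, hi = max(work), sum(work)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_max_partition([7, 2, 5, 10, 8], 2))   # -> 18, i.e. [7,2,5] | [10,8]
```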

  20. An amoeboid algorithm for solving linear transportation problem

    NASA Astrophysics Data System (ADS)

    Gao, Cai; Yan, Chao; Zhang, Zili; Hu, Yong; Mahadevan, Sankaran; Deng, Yong

    2014-03-01

    Transportation Problem (TP) is one of the basic operational research problems, which plays an important role in many practical applications. In this paper, a bio-inspired mathematical model is proposed to handle the Linear Transportation Problem (LTP) in directed networks by modifying the original amoeba model Physarum Solver. Several examples are used to show that the proposed model can effectively solve the Balanced Transportation Problem (BTP) and the Unbalanced Transportation Problem (UTP), and especially the Generalized Transportation Problem (GTP), in a nondiscrete way.

  1. Singular linear-quadratic control problem for systems with linear delay

    SciTech Connect

    Sesekin, A. N.

    2013-12-18

    A singular linear-quadratic optimization problem on the trajectories of non-autonomous linear differential equations with linear delay is considered. The peculiarity of this problem is that it has no solution in the class of integrable controls. To ensure the existence of a solution, the class of controls must be expanded to include controls with impulse components. Dynamical systems with linear delay are used to describe, for example, the motion of a pantograph current collector in electric traction, as well as processes in biology. It should be noted that singularity of the quality criterion occurs quite commonly in practical problems, and therefore the study of these problems is certainly important. For the problem under discussion, an optimal program control containing impulse components at the initial and final moments of time is constructed under certain assumptions on the functional and the right-hand side of the control system.

  2. Multisplitting for linear, least squares and nonlinear problems

    SciTech Connect

    Renaut, R.

    1996-12-31

    In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem and of nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work both with Andreas Frommer, University of Wuppertal, for the linear problems and with Hans Mittelmann, Arizona State University, for the nonlinear problems.

  3. Experiences with linear solvers for oil reservoir simulation problems

    SciTech Connect

    Joubert, W.; Janardhan, R.; Biswas, D.; Carey, G.

    1996-12-31

    This talk will focus on practical experiences with iterative linear solver algorithms used in conjunction with Amoco Production Company's Falcon oil reservoir simulation code. The goal of this study is to determine the best linear solver algorithms for these types of problems. The results of numerical experiments will be presented.

  4. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  5. Complementarity, Sets and Numbers

    ERIC Educational Resources Information Center

    Otte, M.

    2003-01-01

    Niels Bohr's term "complementarity" has been used by several authors to capture the essential aspects of the cognitive and epistemological development of scientific and mathematical concepts. In this paper we will conceive of complementarity in terms of the dual notions of extension and intension of mathematical terms. A complementarist approach…

  6. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    SciTech Connect

    Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati

    2014-06-19

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation, in which the α-cut technique and linguistic hedges are used on the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  7. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    NASA Astrophysics Data System (ADS)

    Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati

    2014-06-01

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation, in which the α-cut technique and linguistic hedges are used on the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  8. Finding Optimal Gains In Linear-Quadratic Control Problems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E., Jr.

    1990-01-01

    Analytical method based on Volterra factorization leads to new approximations for optimal control gains in finite-time linear-quadratic control problem of system having infinite number of dimensions. Circumvents need to analyze and solve Riccati equations and provides more transparent connection between dynamics of system and optimal gain.

  9. What Is Complementarity?

    NASA Astrophysics Data System (ADS)

    Howard, Don

    2013-04-01

    Complementarity is Niels Bohr's most original contribution to the interpretation of quantum mechanics, but there is widespread confusion about complementarity in the popular literature and even in some of the serious scholarly literature on Bohr. This talk provides a historically grounded guide to Bohr's own understanding of the doctrine, emphasizing the manner in which complementarity is deeply rooted in the physics of the quantum world, in particular the physics of entanglement, and is, therefore, not just an idiosyncratic philosophical addition. Among the more specific points to be made are that complementarity is not to be confused with wave-particle duality, that it is importantly different from Heisenberg's idea of observer-induced limitations on measurability, and that it is in no way an expression of a positivist philosophical project.

  10. Towards an ideal preconditioner for linearized Navier-Stokes problems

    SciTech Connect

    Murphy, M.F.

    1996-12-31

    Discretizing certain linearizations of the steady-state Navier-Stokes equations gives rise to nonsymmetric linear systems with indefinite symmetric part. We show that for such systems there exists a block diagonal preconditioner which gives convergence in three GMRES steps, independent of the mesh size and viscosity parameter (Reynolds number). While this "ideal" preconditioner is too expensive to be used in practice, it provides a useful insight into the problem. We then consider various approximations to the ideal preconditioner, and describe the eigenvalues of the preconditioned systems. Finally, we compare these preconditioners numerically, and present our conclusions.

  11. Successive linear optimization approach to the dynamic traffic assignment problem

    SciTech Connect

    Ho, J.K.

    1980-11-01

    A dynamic model for the optimal control of traffic flow over a network is considered. The model, which treats congestion explicitly in the flow equations, gives rise to nonlinear, nonconvex mathematical programming problems. It has been shown for a piecewise linear version of this model that a global optimum is contained in the set of optimal solutions of a certain linear program. A sufficient condition for optimality is presented which implies that a global optimum can be obtained by successively optimizing at most N + 1 objective functions for the linear program, where N is the number of time periods in the planning horizon. Computational results are reported to indicate the efficiency of this approach.

  12. Regularized total least squares approach for nonconvolutional linear inverse problems.

    PubMed

    Zhu, W; Wang, Y; Galatsanos, N P; Zhang, J

    1999-01-01

    In this correspondence, a solution is developed for the regularized total least squares (RTLS) estimate in linear inverse problems where the linear operator is nonconvolutional. Our approach is based on a Rayleigh quotient (RQ) formulation of the TLS problem, and we accomplish regularization by modifying the RQ function to enforce a smooth solution. A conjugate gradient algorithm is used to minimize the modified RQ function. As an example, the proposed approach has been applied to the perturbation equation encountered in optical tomography. Simulation results show that this method provides more stable and accurate solutions than the regularized least squares and a previously reported total least squares approach, also based on the RQ formulation. PMID:18267442

  13. An analytically solvable eigenvalue problem for the linear elasticity equations.

    SciTech Connect

    Day, David Minot; Romero, Louis Anthony

    2004-07-01

    Analytic solutions are useful for code verification. Structural vibration codes approximate solutions to the eigenvalue problem for the linear elasticity equations (Navier's equations). Unfortunately the verification method of 'manufactured solutions' does not apply to vibration problems. Verification books (for example [2]) tabulate a few of the lowest modes, but are not useful for computations of large numbers of modes. A closed form solution is presented here for all the eigenvalues and eigenfunctions for a cuboid solid with isotropic material properties. The boundary conditions correspond physically to a greased wall.

  14. A nonlinear complementarity approach for the national energy modeling system

    SciTech Connect

    Gabriel, S.A.; Kydes, A.S.

    1995-03-08

    The National Energy Modeling System (NEMS) is a large-scale mathematical model that computes equilibrium fuel prices and quantities in the U.S. energy sector. At present, to generate these equilibrium values, NEMS sequentially solves a collection of linear programs and nonlinear equations. The NEMS solution procedure then incorporates the solutions of these linear programs and nonlinear equations in a nonlinear Gauss-Seidel approach. The authors describe how the current version of NEMS can be formulated as a particular nonlinear complementarity problem (NCP), thereby possibly avoiding current convergence problems. In addition, they show that the NCP format is equally valid for a more general form of NEMS. They also describe several promising approaches for solving the NCP form of NEMS based on recent Newton type methods for general NCPs. These approaches share the feature of needing to solve their direction-finding subproblems only approximately. Hence, they can effectively exploit the sparsity inherent in the NEMS NCP.

  15. Rees algebras, Monomial Subrings and Linear Optimization Problems

    NASA Astrophysics Data System (ADS)

    Dupont, Luis A.

    2010-06-01

    In this thesis we are interested in studying algebraic properties of monomial algebras that can be linked to combinatorial structures, such as graphs and clutters, and to optimization problems. A goal here is to establish bridges between commutative algebra, combinatorics and optimization. We study the normality and the Gorenstein property, as well as the canonical module and the a-invariant, of Rees algebras and subrings arising from linear optimization problems. In particular, we study algebraic properties of edge ideals and algebras associated to uniform clutters with the max-flow min-cut property or the packing property. We also study algebraic properties of symbolic Rees algebras of edge ideals of graphs, edge ideals of clique clutters of comparability graphs, and Stanley-Reisner rings.

  16. Efficient algorithms for linear dynamic inverse problems with known motion

    NASA Astrophysics Data System (ADS)

    Hahn, B. N.

    2014-03-01

    An inverse problem is called dynamic if the object changes during the data acquisition process. This occurs e.g. in medical applications when fast moving organs like the lungs or the heart are imaged. Most regularization methods are based on the assumption that the object is static during the measuring procedure. Hence, their application in the dynamic case often leads to serious motion artefacts in the reconstruction. Therefore, an algorithm has to take into account the temporal changes of the investigated object. In this paper, a reconstruction method that compensates for the motion of the object is derived for dynamic linear inverse problems. The algorithm is validated on numerical examples from computerized tomography.

  17. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.

  18. Some problems in applications of the linear variational method

    NASA Astrophysics Data System (ADS)

    Pupyshev, Vladimir I.; Montgomery, H. E.

    2015-09-01

    The linear variational method is a standard computational method in quantum mechanics and quantum chemistry. As taught in most classes, the general guidance is to include as many basis functions as practical in the variational wave function. However, if it is desired to study the patterns of energy change accompanying the change of system parameters such as the shape and strength of the potential energy, the problem becomes more complicated. We use one-dimensional systems with a particle in a rectangular or in a harmonic potential confined in an infinite rectangular box to illustrate situations where a variational calculation can give incorrect results. These situations result when the energy of the lowest eigenvalue is strongly dependent on the parameters that describe the shape and strength of the potential. The numerical examples described in this work are provided as cautionary notes for practitioners of numerical variational calculations.

  19. First integrals for the Kepler problem with linear drag

    NASA Astrophysics Data System (ADS)

    Margheri, Alessandro; Ortega, Rafael; Rebelo, Carlota

    2016-07-01

    In this work we consider the Kepler problem with linear drag, and prove the existence of a continuous vector-valued first integral, obtained by taking the limit as t → +∞ of the Runge-Lenz vector. The norm of this first integral can be interpreted as an asymptotic eccentricity e_∞ with 0 ≤ e_∞ ≤ 1. The orbits satisfying e_∞ < 1 approach the singularity by an elliptic spiral, and the corresponding solutions x(t) = r(t)e^{iθ(t)} have a norm r(t) that goes to zero like a negative exponential and an argument θ(t) that goes to infinity like a positive exponential. In particular, the difference between consecutive times of passage through the pericenter, say T_{n+1} − T_n, goes to zero as 1/n.

  20. Complementarity is not enough

    NASA Astrophysics Data System (ADS)

    Bousso, Raphael

    2013-06-01

    The near-horizon field B of an old black hole is maximally entangled with the early Hawking radiation R, by unitarity of the S-matrix. But B must be maximally entangled with the black hole interior A, by the equivalence principle. Causal patch complementarity fails to reconcile these conflicting requirements. The system B can be probed by a freely falling observer while there is still time to turn around and remain outside the black hole. Therefore, the entangled state of the BR system is dictated by unitarity even in the infalling patch. If, by monogamy of entanglement, B is not entangled with A, the horizon is replaced by a singularity or “firewall.” To illustrate the radical nature of the ideas that are needed, I briefly discuss two approaches for avoiding a firewall: the identification of A with a subsystem of R; and a combination of patch complementarity with the Horowitz-Maldacena final-state proposal.

  1. Application of fractional derivative models in linear viscoelastic problems

    NASA Astrophysics Data System (ADS)

    Sasso, M.; Palmieri, G.; Amodio, D.

    2011-11-01

    Appropriate knowledge of the viscoelastic properties of polymers and elastomers is of fundamental importance for correct modelling and analysis of structures where such materials are present, especially when dealing with dynamic and vibration problems. In this paper experimental results of a series of compression and tension tests on specimens of styrene-butadiene rubber and polypropylene plastic are presented; the tests consist of creep and relaxation tests, as well as cyclic loading at different frequencies. The experimental data are then used to calibrate some linear viscoelastic models; besides the classical approach based on a combination in series or parallel of standard mechanical elements such as springs and dashpots, particular emphasis is given to the application of models whose constitutive equations are based on differential equations of fractional order (fractional derivative models). The two approaches are compared by analyzing their capability to reproduce all the experimental data for the given materials; also, the main computational issues related to these models are addressed, and the advantage of using a limited number of parameters is demonstrated.

  2. Complementarity and quantum walks

    SciTech Connect

    Kendon, Viv; Sanders, Barry C.

    2005-02-01

    We show that quantum walks interpolate between a coherent 'wave walk' and a random walk depending on how strongly the walker's coin state is measured; i.e., the quantum walk exhibits the quintessentially quantum property of complementarity, which is manifested as a tradeoff between knowledge of which path the walker takes vs the sharpness of the interference pattern. A physical implementation of a quantum walk (the quantum quincunx) should thus have an identifiable walker and the capacity to demonstrate the interpolation between wave walk and random walk depending on the strength of measurement.

  3. The Intelligence of Dual Simplex Method to Solve Linear Fractional Fuzzy Transportation Problem

    PubMed Central

    Narayanamoorthy, S.; Kalyani, S.

    2015-01-01

    An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The two linear fuzzy transportation problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example. PMID:25810713

  4. LP-DIT interchange tool for linear programming problems

    SciTech Connect

    Makowski, M.

    1994-12-31

    LP-DIT is a small library that provides easy handling of LP problem data between a problem generator, a solver and other modules (problem modification, generation of multi-criteria problems, report writers, etc.). So far LP-DIT has been implemented with 4 LP solvers (including one MIP solver) and is being used as a module for a model-based Decision Support System. LP-DIT will be released as public domain software in the coming weeks.

  5. Multigrid approaches to non-linear diffusion problems on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The efficiency of three multigrid methods for solving highly non-linear diffusion problems on two-dimensional unstructured meshes is examined. The three multigrid methods differ mainly in the manner in which the nonlinearities of the governing equations are handled. These comprise a non-linear full approximation storage (FAS) multigrid method which is used to solve the non-linear equations directly, a linear multigrid method which is used to solve the linear system arising from a Newton linearization of the non-linear system, and a hybrid scheme which is based on a non-linear FAS multigrid scheme, but employs a linear solver on each level as a smoother. Results indicate that all methods are equally effective at converging the non-linear residual in a given number of grid sweeps, but that the linear solver is more efficient in cpu time due to the lower cost of linear versus non-linear grid sweeps.

  6. Problems with the linear q-Fokker Planck equation

    NASA Astrophysics Data System (ADS)

    Yano, Ryosuke

    2015-05-01

    In this letter, we discuss the linear q-Fokker-Planck equation, whose solution follows the Tsallis distribution, from the viewpoint of kinetic theory. Using normal definitions of moments, we can expand the distribution function with infinite moments for 0 ⩽ q < 1, whereas we cannot expand the distribution function with infinite moments for 1 < q owing to the emergence of characteristic points in the moments. From Grad's 13 moment equations for the linear q-Fokker-Planck equation, the dissipation rate of the heat flux via the linear q-Fokker-Planck equation diverges at 0 ⩽ q < 2/3. In other words, both the thermal conductivity that relates the heat flux to the spatial gradient of the temperature and the thermal conductivity that relates the heat flux to the spatial gradient of the density jump to zero at q = 2/3, discontinuously.

  7. Fixed Point Problems for Linear Transformations on Pythagorean Triples

    ERIC Educational Resources Information Center

    Zhan, M.-Q.; Tong, J.-C.; Braza, P.

    2006-01-01

    In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z][superscript T] with y being even) into a standard Pythagorean triple, which have [3 4 5][superscript T] as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…

  8. A linear regression solution to the spatial autocorrelation problem

    NASA Astrophysics Data System (ADS)

    Griffith, Daniel A.

    The Moran Coefficient spatial autocorrelation index can be decomposed into orthogonal map pattern components. This decomposition relates it directly to standard linear regression, in which corresponding eigenvectors can be used as predictors. This paper reports comparative results between these linear regressions and their auto-Gaussian counterparts for the following georeferenced data sets: Columbus (Ohio) crime, Ottawa-Hull median family income, Toronto population density, southwest Ohio unemployment, Syracuse pediatric lead poisoning, and Glasgow standard mortality rates, and a small remotely sensed image of the High Peak district. This methodology is extended to auto-logistic and auto-Poisson situations, with selected data analyses including percentage of urban population across Puerto Rico, and the frequency of SIDS cases across North Carolina. These data analytic results suggest that this approach to georeferenced data analysis offers considerable promise.
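
    The eigenvector decomposition referred to above uses the eigenvectors of the doubly centered spatial weights matrix MWM, with M = I − 11ᵀ/n, as additional regressors. A minimal sketch on a tiny hypothetical contiguity matrix, not the data sets analyzed in the paper:

```python
import numpy as np

# Sketch of eigenvector spatial-filtering regression on a hypothetical
# 4-site example; W is a symmetric 0/1 contiguity (spatial weights) matrix.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([3.1, 2.9, 4.2, 5.0])       # hypothetical georeferenced response
n = len(y)

M = np.eye(n) - np.ones((n, n)) / n      # centering projector
vals, vecs = np.linalg.eigh(M @ W @ M)   # Moran "map pattern" eigenvectors

# Use the eigenvectors with the largest (most positively autocorrelated)
# eigenvalues as extra predictors alongside the intercept.
E = vecs[:, np.argsort(vals)[::-1][:2]]
X = np.column_stack([np.ones(n), E])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)
```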

  9. Solution of the linear regression problem using matrix correction methods in the l1 metric

    NASA Astrophysics Data System (ADS)

    Gorelik, V. A.; Trembacheva (Barkalova), O. S.

    2016-02-01

    The linear regression problem is considered as an improper interpolation problem. The l1 metric is used to correct (approximate) all the initial data. A probabilistic justification of this metric in the case of the exponential noise distribution is given. The original improper interpolation problem is reduced to a set of a finite number of linear programming problems. The corresponding computational algorithms are implemented in MATLAB.
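
    The reduction to linear programming can be written down directly for the ordinary least-absolute-deviation part of the problem: minimizing Σ|y − Xb| is an LP in (b, t) with constraints −t ≤ y − Xb ≤ t. A sketch in Python (the paper's codes are in MATLAB, and its matrix-correction setting also corrects X itself, which this sketch does not):

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    """Least-absolute-deviation fit min_b sum_i |y_i - X_i b| posed as an LP:
    variables (b_1..b_d, t_1..t_m), minimize sum t, with |y - Xb| <= t."""
    m, d = X.shape
    c = np.concatenate([np.zeros(d), np.ones(m)])
    A_ub = np.block([[ X, -np.eye(m)],     #  X b - t <=  y
                     [-X, -np.eye(m)]])    # -X b - t <= -y
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * d + [(0, None)] * m)
    return res.x[:d]

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = 2.0 + 3.0 * x + rng.laplace(scale=0.05, size=20)   # double-exponential noise
print(l1_regression(np.column_stack([np.ones(20), x]), y))
```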

  10. Linear stability for some symmetric periodic simultaneous binary collision orbits in the four-body problem

    NASA Astrophysics Data System (ADS)

    Bakker, Lennard F.; Ouyang, Tiancheng; Yan, Duokui; Simmons, Skyler; Roberts, Gareth E.

    2010-10-01

    We apply the analytic-numerical method of Roberts to determine the linear stability of time-reversible periodic simultaneous binary collision orbits in the symmetric collinear four-body problem with masses 1, m, m, 1, and also in a symmetric planar four-body problem with equal masses. In both problems, the assumed symmetries reduce the determination of linear stability to the numerical computation of a single real number. For the collinear problem, this verifies the earlier numerical results of Sweatman for linear stability with respect to collinear and symmetric perturbations.

  11. Linear quadratic tracking problems in Hilbert space - Application to optimal active noise suppression

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.

    1989-01-01

    A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.

  12. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676

  13. Towards Resolving the Crab Sigma-Problem: A Linear Accelerator?

    NASA Technical Reports Server (NTRS)

    Contopoulos, Ioannis; Kazanas, Demosthenes; White, Nicholas E. (Technical Monitor)

    2002-01-01

    Using the exact solution of the axisymmetric pulsar magnetosphere derived in a previous publication and the conservation laws of the associated MHD flow, we show that the Lorentz factor of the outflowing plasma increases linearly with distance from the light cylinder. Therefore, the ratio of the Poynting to particle energy flux, generically referred to as sigma, decreases inversely proportional to distance, from a large value (typically ≳ 10^4) near the light cylinder to sigma ≈ 1 at a transition distance R_trans. Beyond this distance the inertial effects of the outflowing plasma become important and the magnetic field geometry must deviate from the almost monopolar form it attains between R_lc and R_trans. We anticipate that this is achieved by collimation of the poloidal field lines toward the rotation axis, ensuring that the magnetic field pressure in the equatorial region will fall off faster than 1/R^2 (R being the cylindrical radius). This leads both to a value sigma = sigma_s ≪ 1 at the nebular reverse shock at distance R_s (R_s ≫ R_trans) and to a component of the flow perpendicular to the equatorial component, as required by observation. The presence of the strong shock at R = R_s allows for the efficient conversion of kinetic energy into radiation. We speculate that the Crab pulsar is unique in requiring sigma_s ≈ 3 × 10^-3 because of its small translational velocity, which allowed the shock distance R_s to grow to values much greater than R_trans.

  14. An application of a linear programing technique to nonlinear minimax problems

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1973-01-01

    A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programing algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
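
    The linear minimax subproblem solved at each differential-correction step has a standard LP form: minimize the bound e subject to −e ≤ y − Xb ≤ e. A minimal sketch of that linear subproblem (not the full nonlinear iteration of the report):

```python
import numpy as np
from scipy.optimize import linprog

def minimax_fit(X, y):
    """Linear minimax (Chebyshev) approximation: choose b to minimize
    max_i |y_i - X_i b|, written as the LP  min e  s.t. -e <= y - Xb <= e."""
    m, d = X.shape
    c = np.concatenate([np.zeros(d), [1.0]])          # minimize e
    A_ub = np.block([[ X, -np.ones((m, 1))],          #  X b - e <=  y
                     [-X, -np.ones((m, 1))]])         # -X b - e <= -y
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * d + [(0, None)])
    return res.x[:d], res.x[-1]                       # coefficients, minimax error

x = np.linspace(0, 1, 30)
X = np.column_stack([np.ones_like(x), x])             # fit a straight line...
b, err = minimax_fit(X, np.exp(x))                    # ...to exp(x) on [0, 1]
print(b, err)
```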

  15. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    SciTech Connect

    Gene Golub; Kwok Ko

    2009-03-30

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve on the algorithms so that the ever increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of the previous methods, with the goal of enabling accelerator simulations, for this class of problems, on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
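
    The Hermitian/skew-Hermitian splitting (HSS) iteration alternates two shifted half-steps built from H = (A + A*)/2 and S = (A − A*)/2. A dense textbook sketch on a tiny hypothetical system, not the large-scale accelerator code developed under the award:

```python
import numpy as np

def hss_iteration(A, b, alpha=1.0, iters=50):
    """Hermitian/skew-Hermitian splitting (HSS) iteration for A x = b with
    A non-Hermitian positive definite: split A = H + S, H = (A + A*)/2,
    S = (A - A*)/2, and alternate the two shifted half-steps."""
    n = A.shape[0]
    H = 0.5 * (A + A.conj().T)
    S = 0.5 * (A - A.conj().T)
    I = np.eye(n)
    x = np.zeros(n, dtype=A.dtype)
    for _ in range(iters):
        x = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x + b)
    return x

A = np.array([[4.0, 1.0], [-1.0, 3.0]])   # non-symmetric, positive definite part
b = np.array([1.0, 2.0])
print(hss_iteration(A, b), np.linalg.solve(A, b))   # the two should agree closely
```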

  16. Global symmetry relations in linear and viscoplastic mobility problems

    NASA Astrophysics Data System (ADS)

    Kamrin, Ken; Goddard, Joe

    2014-11-01

    The mobility tensor of a textured surface is a homogenized effective boundary condition that describes the effective slip of a fluid adjacent to the surface in terms of an applied shear traction far above the surface. In the Newtonian fluid case, perturbation analysis yields a mobility tensor formula, which suggests that regardless of the surface texture (i.e. nonuniform hydrophobicity distribution and/or height fluctuations) the mobility tensor is always symmetric. This conjecture is verified using a Lorentz reciprocity argument. It motivates the question of whether such symmetries would arise for nonlinear constitutive relations and boundary conditions, where the mobility tensor is not a constant but a function of the applied stress. We show that in the case of a strongly dissipative nonlinear constitutive relation--one whose strain-rate relates to the stress solely through a scalar Edelen potential--and strongly dissipative surface boundary conditions--one whose hydrophobic character is described by a potential relating slip to traction--the mobility function of the surface also maintains tensorial symmetry. By extension, the same variational arguments can be applied in problems such as the permeability tensor for viscoplastic flow through porous media, and we find that similar symmetries arise. These findings could be used to simplify the characterization of viscoplastic drag in various anisotropic media. (Joe Goddard is a former graduate student of Acrivos).

  17. Solution algorithms for non-linear singularly perturbed optimal control problems

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.

    1983-01-01

    The applicability and usefulness of several classical and other methods for solving the two-point boundary-value problem which arises in non-linear singularly perturbed optimal control are assessed. Specific algorithms of the Picard, Newton and averaging types are formally developed for this class of problem. The computational requirements associated with each algorithm are analysed and compared with the computational requirement of the method of matched asymptotic expansions. Approximate solutions to a linear and a non-linear problem are obtained by each method and compared.

  18. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  19. New computational approach for the linearized scalar potential formulation of the magnetostatic field problem

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.

    1981-01-01

    The linearized scalar potential formulation of the magnetostatic field problem is considered. The approach involves a reformulation of the continuous problem as a parametric boundary problem. By the introduction of a spherical interface and the use of spherical harmonics, the infinite boundary condition can also be satisfied in the parametric framework. The reformulated problem is discretized by finite element techniques and a discrete parametric problem is solved by conjugate gradient iteration. This approach decouples the problem in that only standard Neumann type elliptic finite element systems on separate bounded domains need be solved. The boundary conditions at infinity and the interface conditions are satisfied during the boundary parametric iteration.

  20. Numerical approximation for the infinite-dimensional discrete-time optimal linear-quadratic regulator problem

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
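
    In the finite-dimensional approximations produced by such schemes, the optimal gains follow from the backward discrete-time Riccati recursion. A minimal sketch with hypothetical matrices (the paper's setting is infinite dimensional; this shows only the finite-dimensional recursion that the approximations reduce to):

```python
import numpy as np

def dlqr_gains(A, B, Q, R, N):
    """Finite-horizon discrete-time LQR: backward Riccati recursion producing
    the time-varying state-feedback gains u_k = -K_k x_k."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]   # gains ordered k = 0 .. N-1

# Hypothetical 2-state discretized plant.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K0 = dlqr_gains(A, B, np.eye(2), np.array([[1.0]]), N=50)[0]
print(K0)
```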

  1. Exact solution to the curve crossing problems of two linear diabatic potentials by transfer matrix method

    NASA Astrophysics Data System (ADS)

    Diwaker; Chakraborty, Aniruddha

    2015-12-01

    In the present work we have reported a simple exact analytical solution to the curve crossing problem of two linear diabatic potentials by transfer matrix method. Our problem assumes the crossing of two linear diabatic potentials which are coupled to each other by an arbitrary coupling (in contrast to linear potentials in the vicinity of crossing points) and for numerical calculation purposes this arbitrary coupling is taken as Gaussian coupling which is further expressed as a collection of Dirac delta functions. Further we calculated the transition probability from one diabatic potential to another by the use of this method.

  2. Newton's method for large bound-constrained optimization problems.

    SciTech Connect

    Lin, C.-J.; More, J. J.; Mathematics and Computer Science

    1999-01-01

    We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
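
    The trust-region Newton method analyzed in this record is considerably more elaborate than can be shown in a few lines; the following sketch illustrates only the simpler projected-gradient idea for box constraints that such methods build on, applied to a hypothetical convex quadratic, and is not the authors' algorithm.

```python
import numpy as np

def projected_gradient(grad, x0, lower, upper, step=0.1, iters=500):
    """Minimal projected-gradient loop for bound constraints l <= x <= u.

    grad : callable returning the gradient of the objective at x.
    The projection onto the box is a componentwise clip."""
    x = np.clip(x0, lower, upper)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lower, upper)
    return x

# Hypothetical strictly convex quadratic 0.5*x'Hx - c'x with bounds [0, 2]^2.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([-1.0, 4.0])
grad = lambda x: H @ x - c

x_star = projected_gradient(grad, np.zeros(2), lower=0.0, upper=2.0)
print("approximate bound-constrained minimizer:", x_star)
```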

  3. Illusion of Linearity in Geometry: Effect in Multiple-Choice Problems

    ERIC Educational Resources Information Center

    Vlahovic-Stetic, Vesna; Pavlin-Bernardic, Nina; Rajter, Miroslav

    2010-01-01

    The aim of this study was to examine if there is a difference in the performance on non-linear problems regarding age, gender, and solving situation, and whether the multiple-choice answer format influences students' thinking. A total of 112 students, aged 15-16 and 18-19, were asked to solve problems for which solutions based on proportionality…

  4. Fast and Robust Newton strategies for non-linear geodynamics problems

    NASA Astrophysics Data System (ADS)

    Le Pourhiet, Laetitia; May, Dave

    2014-05-01

    Geodynamic problems are inherently non-linear, with sources of non-linearity arising from (i) the rheology, (ii) the boundary conditions and (iii) the choice of time integration scheme. We have developed a robust non-linear scheme utilizing PETSc's non-linear solver framework, SNES. Through the SNES framework, we have access to a wide range of globalization techniques. In this work we make extensive use of the line search implementations. We explored a wide range of different strategies for solving a variety of non-linear problems specific to geodynamics. In this presentation, we report on the most robust line-search techniques we have found for the three classes of non-linearities previously identified. Among the class of rheological non-linearities, the shear banding instability arising from visco-plastic flow rules is the most difficult to solve. Distinct from its sibling, the elasto-plastic rheology, the visco-plastic rheology causes instantaneous shear localisation. As a result, decreasing the time step is not a viable approach to better capture the initial phase of localisation. Furthermore, return map algorithms based on a consistent tangent cannot be used, as the slope of the tangent is infinite. Obtaining a converged non-linear solution to this problem relies only on the robustness of the non-linear solver. After presenting a Newton methodology suitable for rheological non-linearities, we examine the performance of this formulation when frictional sliding boundary conditions are introduced. We assess the robustness of the non-linear solver when applied to critical taper type problems.
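
    As a loose illustration of the globalization idea discussed here (not the PETSc/SNES implementation the authors use), the sketch below runs a damped Newton iteration with a backtracking line search on a small hypothetical nonlinear system.

```python
import numpy as np

def newton_linesearch(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method globalized with a backtracking line search on the
    merit function 0.5*||F(x)||^2."""
    x = x0.astype(float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        dx = np.linalg.solve(J(x), -f)            # Newton direction
        alpha = 1.0
        while np.linalg.norm(F(x + alpha * dx)) > (1 - 1e-4 * alpha) * np.linalg.norm(f):
            alpha *= 0.5                          # backtrack until sufficient decrease
            if alpha < 1e-12:
                break
        x = x + alpha * dx
    return x

# Hypothetical 2x2 nonlinear system: x0^2 + x1^2 - 4 = 0, exp(x0) + x1 - 1 = 0.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [np.exp(x[0]), 1.0]])

print(newton_linesearch(F, J, np.array([1.0, 1.0])))
```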

  5. Digital program for solving the linear stochastic optimal control and estimation problem

    NASA Technical Reports Server (NTRS)

    Geyser, L. C.; Lehtinen, B.

    1975-01-01

    A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.

  6. Two-scale homogenization of electromechanically coupled boundary value problems. Consistent linearization and applications

    NASA Astrophysics Data System (ADS)

    Schröder, Jörg; Keip, Marc-André

    2012-08-01

    The contribution addresses a direct micro-macro transition procedure for electromechanically coupled boundary value problems. The two-scale homogenization approach is implemented into a so-called FE2-method which allows for the computation of macroscopic boundary value problems in consideration of microscopic representative volume elements. The resulting formulation is applicable to the computation of linear as well as nonlinear problems. In the present paper, linear piezoelectric as well as nonlinear electrostrictive material behavior are investigated, where the constitutive equations on the microscale are derived from suitable thermodynamic potentials. The proposed direct homogenization procedure can also be applied for the computation of effective elastic, piezoelectric, dielectric, and electrostrictive material properties.

  7. On high-continuity transfinite element formulations for linear-nonlinear transient thermal problems

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1987-01-01

    This paper describes recent developments in the applicability of a hybrid transfinite element methodology with emphasis on high-continuity formulations for linear/nonlinear transient thermal problems. The proposed concepts furnish accurate temperature distributions and temperature gradients making use of a relatively smaller number of degrees of freedom; and the methodology is applicable to linear/nonlinear thermal problems. Characteristic features of the formulations are described in technical detail as the proposed hybrid approach combines the major advantages and modeling features of high-continuity thermal finite elements in conjunction with transform methods and classical Galerkin schemes. Several numerical test problems are evaluated and the results obtained validate the proposed concepts for linear/nonlinear thermal problems.

  8. Some comparison of restarted GMRES and QMR for linear and nonlinear problems

    SciTech Connect

    Morgan, R.; Joubert, W.

    1994-12-31

    Comparisons are made between the following methods: QMR, including its transpose-free version; restarted GMRES; and a modified restarted GMRES that uses approximate eigenvectors to improve convergence. For some problems, the modified GMRES is competitive with or better than QMR in terms of the number of matrix-vector products. Also, the GMRES methods can be much better when several similar systems of linear equations must be solved, as in the case of nonlinear problems and ODE problems.
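
    For readers unfamiliar with the restarted method being compared, the sketch below implements plain GMRES(m) with Arnoldi and a small Hessenberg least-squares solve on a synthetic nonsymmetric system; it is not the authors' eigenvector-augmented variant, and a QMR run (for instance via SciPy's qmr) could be placed alongside it for the kind of comparison reported here.

```python
import numpy as np

def gmres_restarted(A, b, m=20, max_restarts=50, tol=1e-8):
    """Plain restarted GMRES(m): Arnoldi on the Krylov subspace, then a small
    least-squares problem with the Hessenberg matrix at each restart cycle."""
    n = b.size
    x = np.zeros(n)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):                # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:               # happy breakdown
                k = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + Q[:, :k] @ y
    return x

# Hypothetical nonsymmetric test matrix.
rng = np.random.default_rng(0)
A = np.eye(200) + 0.1 * rng.standard_normal((200, 200))
b = rng.standard_normal(200)
x = gmres_restarted(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```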

  9. A nonlinear singular eigenvalue problem for a linear system of ordinary differential equations with redundant conditions

    NASA Astrophysics Data System (ADS)

    Abramov, A. A.; Yukhno, L. F.

    2016-07-01

    A nonlinear eigenvalue problem for a linear system of ordinary differential equations is examined on a semi-infinite interval. The problem is supplemented by nonlocal conditions specified by a Stieltjes integral. At infinity, the solution must be bounded. In addition to these basic conditions, the solution must satisfy certain redundant conditions, which are also nonlocal. A numerically stable method for solving such a singular overdetermined eigenvalue problem is proposed and analyzed. The essence of the method is that this overdetermined problem is replaced by an auxiliary problem consistent with all the above conditions.

  10. Initial-value problem for a linear ordinary differential equation of noninteger order

    SciTech Connect

    Pskhu, Arsen V

    2011-04-30

    An initial-value problem for a linear ordinary differential equation of noninteger order with Riemann-Liouville derivatives is stated and solved. The initial conditions of the problem ensure that (by contrast with the Cauchy problem) it is uniquely solvable for an arbitrary set of parameters specifying the orders of the derivatives involved in the equation; these conditions are necessary for the equation under consideration. The problem is reduced to an integral equation; an explicit representation of the solution in terms of the Wright function is constructed. As a consequence of these results, necessary and sufficient conditions for the solvability of the Cauchy problem are obtained. Bibliography: 7 titles.

  11. Fast pricing of American options by linear programming

    SciTech Connect

    Dempster, M.; Hutton, J.P.

    1994-12-31

    This paper describes a new method for computation of the value of various American options on underlying dividend bearing securities under standard Black-Scholes assumptions. It is well known that the problem of valuing the American put can be expressed as solving an abstract linear complementarity problem in terms of a parabolic partial differential operator. Generalizing earlier work of Cryer, Dempster and Borwein for elliptic operators, we show that the American put option value function is the solution of an abstract linear programme bounded by the payoff at exercise. Different American options require only different payoff function bounds. Standard finite difference or finite element approximations to the complementarity problem lead to ordinary linear programmes. We report promising computational results for several American option types using IBM's Optimization System Library on an RS6000/590.
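
    The LP formulation reported here is not reproduced below; as a loosely related illustration of the discretized complementarity problem it starts from, this sketch applies projected SOR (PSOR) to a generic linear complementarity problem with synthetic symmetric positive definite data rather than a Black-Scholes discretization.

```python
import numpy as np

def psor(M, q, omega=1.2, tol=1e-10, max_iter=5000):
    """Projected SOR for the LCP: find z >= 0 with w = M z + q >= 0 and z'w = 0.

    Assumes M is symmetric positive definite, as for the implicit
    finite-difference operators that arise in American option pricing."""
    n = q.size
    z = np.zeros(n)
    for _ in range(max_iter):
        z_old = z.copy()
        for i in range(n):
            residual = q[i] + M[i] @ z            # uses already-updated entries
            z[i] = max(0.0, z[i] - omega * residual / M[i, i])
        if np.linalg.norm(z - z_old, ord=np.inf) < tol:
            break
    return z

# Synthetic SPD test problem.
rng = np.random.default_rng(1)
G = rng.standard_normal((50, 50))
M = G @ G.T + 50 * np.eye(50)
q = rng.standard_normal(50)

z = psor(M, q)
w = M @ z + q
print("min(z) =", z.min(), " min(w) =", w.min(), " |z'w| =", abs(z @ w))
```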

  12. Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control

    SciTech Connect

    Gaitsgory, Vladimir; Rossomakhine, Sergey

    2015-04-15

    The paper aims at the development of an apparatus for the analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but a possible extension of the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of an available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.

  13. A strictly improving linear programming algorithm based on a series of Phase 1 problems

    SciTech Connect

    Leichner, S.A.; Dantzig, G.B.; Davis, J.W.

    1992-04-01

    When used on degenerate problems, the simplex method often takes a number of degenerate steps at a particular vertex before moving to the next. In theory (although rarely in practice), the simplex method can actually cycle at such a degenerate point. Instead of trying to modify the simplex method to avoid degenerate steps, we have developed a new linear programming algorithm that is completely impervious to degeneracy. This new method solves the Phase II problem of finding an optimal solution by solving a series of Phase I feasibility problems. Strict improvement is attained at each iteration in the Phase I algorithm, and the Phase II sequence of feasibility problems has linear convergence in the number of Phase I problems. When tested on the 30 smallest NETLIB linear programming test problems, the computational results for the new Phase II algorithm were over 15% faster than the simplex method; on some problems, it was almost two times faster, and on one problem it was four times faster.

  14. An iterative method to solve the heat transfer problem under the non-linear boundary conditions

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenggang; Kaliske, Michael

    2012-02-01

    The aim of the paper is to determine an approximation of the tangential matrix for solving the non-linear heat transfer problem. A numerical model of the strongly non-linear heat transfer problem based on the theory of the finite element method is presented. The tangential matrix of the Newton method is formulated. A method to solve the heat transfer problem with non-linear boundary conditions, based on the secant slope of a reference function, is developed. The contraction mapping principle is introduced to verify the convergence of this method. The application of the method is shown by two examples. Numerical results for these examples are comparable to those obtained with the Newton method and the commercial software COMSOL for the heat transfer problem under radiative boundary conditions.

  15. Complementarity in Categorical Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Heunen, Chris

    2012-07-01

    We relate notions of complementarity in three layers of quantum mechanics: (i) von Neumann algebras, (ii) Hilbert spaces, and (iii) orthomodular lattices. Taking a more general categorical perspective of which the above are instances, we consider dagger monoidal kernel categories for (ii), so that (i) become (sub)endohomsets and (iii) become subobject lattices. By developing a `point-free' definition of copyability we link (i) commutative von Neumann subalgebras, (ii) classical structures, and (iii) Boolean subalgebras.

  16. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

    SciTech Connect

    Yoo, Jaechil

    1996-12-31

    Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter, ν, goes to 1/2. Computational experiments are included.

  17. Weighted linear least squares problem: an interval analysis approach to rank determination

    SciTech Connect

    Manteuffel, T. A.

    1980-08-01

    This is an extension of the work in SAND-80-0655 to the weighted linear least squares problem. Given the weighted linear least squares problem WAx ≈ Wb, where W is a diagonal weighting matrix, and bounds on the uncertainty in the elements of A, we define an interval matrix A^I that contains all perturbations of A due to these uncertainties and say that the problem is rank deficient if any member of A^I is rank deficient. It is shown that, if WA = QR is the QR decomposition of WA, then Q and R^{-1} can be used to bound the rank of A^I. A modification of the Modified Gram-Schmidt QR decomposition yields an algorithm that implements these results. The extra arithmetic is O(MN). Numerical results show the algorithm to be effective on problems in which the weights vary greatly in magnitude.

  18. The Use of the Fourier Transform for Solving Linear Elasticity Problems

    NASA Astrophysics Data System (ADS)

    Kozubek, Tomas; Mocek, Lukas

    2011-11-01

    This paper deals with solving linear elasticity problems using a modified fictitious domain method and an effective solver based on the discrete Fourier transform and the Schur complement reduction in combination with the null space method. The main goal is to show step by step all ingredients of the numerical solution.

  19. Efficient Solvers for Linear Elasticity Problems Based on the Discrete Fourier Transform and TFETI Decomposition

    NASA Astrophysics Data System (ADS)

    Mocek, Lukas; Kozubek, Tomas

    2011-09-01

    The paper deals with the numerical solution of elliptic boundary value problems for 2D linear elasticity using the fictitious domain method in combination with the discrete Fourier transform and the FETI domain decomposition. We briefly mention the theoretical background of these methods, introduce resulting solvers, and demonstrate their efficiency on model benchmarks.

  20. Linear Integro-differential Schroedinger and Plate Problems Without Initial Conditions

    SciTech Connect

    Lorenzi, Alfredo

    2013-06-15

    Via Carleman's estimates we prove uniqueness and continuous dependence results for the temporal traces of solutions to overdetermined linear ill-posed problems related to the Schroedinger and plate equations. The overdetermination is prescribed in an open subset of the (space-time) lateral boundary.

  1. The problem of scheduling for the linear section of a single-track railway

    NASA Astrophysics Data System (ADS)

    Akimova, Elena N.; Gainanov, Damir N.; Golubev, Oleg A.; Kolmogortsev, Ilya D.; Konygin, Anton V.

    2016-06-01

    The paper is devoted to the problem of scheduling for the linear section of a single-track railway: how to organize the flow in both directions in the most efficient way. The authors propose a scheduling algorithm, examine its properties and perform computational experiments.

  2. High Order Finite Difference Methods, Multidimensional Linear Problems and Curvilinear Coordinates

    NASA Technical Reports Server (NTRS)

    Nordstrom, Jan; Carpenter, Mark H.

    1999-01-01

    Boundary and interface conditions are derived for high order finite difference methods applied to multidimensional linear problems in curvilinear coordinates. The boundary and interface conditions lead to conservative schemes and strict and strong stability provided that certain metric conditions are met.

  3. Typical Behavior of the Linear Programming Method for Combinatorial Optimization Problems: A Statistical-Mechanical Perspective

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2014-04-01

    The typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover problem, which is a type of integer programming (IP) problem. To deal with LP and IP using statistical mechanics, a lattice-gas model on the Erdös-Rényi random graphs is analyzed by a replica method. It is found that the LP optimal solution is typically equal to that given by IP below the critical average degree c*=e in the thermodynamic limit. The critical threshold for LP = IP extends the previous result c = 1, and coincides with the replica symmetry-breaking threshold of the IP.
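
    A small numerical illustration of the LP-versus-IP comparison (not the replica-method analysis itself): the LP relaxation of minimum vertex cover is solved with scipy.optimize.linprog and checked against a brute-force integer optimum on a hypothetical sparse random graph with low average degree.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_vertex_cover(n, edges):
    """LP relaxation: minimize sum(x) s.t. x_u + x_v >= 1 per edge, 0 <= x <= 1."""
    if not edges:
        return 0.0
    A_ub = np.zeros((len(edges), n))
    for k, (u, v) in enumerate(edges):
        A_ub[k, u] = A_ub[k, v] = -1.0            # -(x_u + x_v) <= -1
    res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=-np.ones(len(edges)),
                  bounds=[(0, 1)] * n, method="highs")
    return res.fun

def ip_vertex_cover(n, edges):
    """Brute-force integer optimum (only viable for small n)."""
    best = n
    for bits in itertools.product([0, 1], repeat=n):
        if all(bits[u] + bits[v] >= 1 for u, v in edges):
            best = min(best, sum(bits))
    return best

# Sparse Erdos-Renyi-like random graph with average degree below e.
rng = np.random.default_rng(2)
n, p = 14, 0.12
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

print("LP relaxation value:", lp_vertex_cover(n, edges))
print("IP optimum         :", ip_vertex_cover(n, edges))
```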

  4. Geometric tools for solving the FDI problem for linear periodic discrete-time systems

    NASA Astrophysics Data System (ADS)

    Longhi, Sauro; Monteriù, Andrea

    2013-07-01

    This paper studies the problem of detecting and isolating faults in linear periodic discrete-time systems. The aim is to design an observer-based residual generator where each residual is sensitive to one fault, whilst remaining insensitive to the other faults that can affect the system. Making use of geometric tools, and in particular of the notion of the outer observable subspace, the Fault Detection and Isolation (FDI) problem is formulated and solvability conditions are given. An algorithmic procedure is described to determine the solution of the FDI problem.

  5. Comparison of the Tangent Linear Properties of Tracer Transport Schemes Applied to Geophysical Problems.

    NASA Technical Reports Server (NTRS)

    Kent, James; Holdaway, Daniel

    2015-01-01

    A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.

  6. The linearized characteristics method and its application to practical nonlinear supersonic problems

    NASA Technical Reports Server (NTRS)

    Ferri, Antonio

    1952-01-01

    The method of characteristics has been linearized by assuming that the flow field can be represented as a basic flow field determined by nonlinearized methods and a linearized superposed flow field that accounts for small changes of boundary conditions. The method has been applied to two-dimensional rotational flow, where the basic flow is potential flow, and to axially symmetric problems, where conical flows have been used as the basic flows. In both cases the method allows the determination of the flow field to be simplified and the numerical work to be reduced to a few calculations. The calculations of axially symmetric flow can be simplified if tabulated values of some coefficients of the conical flow are obtained. The method has also been applied to slender bodies without symmetry and to some three-dimensional wing problems where two-dimensional flow can be used as the basic flow. Both problems were previously unsolved in the nonlinear-flow approximation.

  7. Voila: A visual object-oriented iterative linear algebra problem solving environment

    SciTech Connect

    Edwards, H.C.; Hayes, L.J.

    1994-12-31

    Application of iterative methods to solve a large linear system of equations currently involves writing a program which calls iterative method subprograms from a large software package. These subprograms have complex interfaces which are difficult to use and even more difficult to program. A problem solving environment specifically tailored to the development and application of iterative methods is needed. This need will be fulfilled by Voila, a problem solving environment which provides a visual programming interface to object-oriented iterative linear algebra kernels. Voila will provide several quantum improvements over current iterative method problem solving environments. First, programming and applying iterative methods is considerably simplified through Voila's visual programming interface. Second, iterative method algorithm implementations are independent of any particular sparse matrix data structure through Voila's object-oriented kernels. Third, the compile-link-debug process is eliminated as Voila operates as an interpreter.

  8. Genetic algorithms: An evolution from Monte Carlo Methods for strongly non-linear geophysical optimization problems

    NASA Astrophysics Data System (ADS)

    Gallagher, Kerry; Sambridge, Malcolm; Drijkoningen, Guy

    In providing a method for solving non-linear optimization problems, Monte Carlo techniques avoid the need for linearization but, in practice, are often prohibitive because of the large number of models that must be considered. A new class of methods known as Genetic Algorithms has recently been devised in the field of Artificial Intelligence. We outline the basic concept of genetic algorithms and discuss three examples. We show that, in locating an optimal model, the new technique is far superior in performance to Monte Carlo techniques in all cases considered. However, Monte Carlo integration is still regarded as an effective method for the subsequent model appraisal.

  9. Stable computation of search directions for near-degenerate linear programming problems

    SciTech Connect

    Hough, P.D.

    1997-03-01

    In this paper, we examine stability issues that arise when computing search directions (Δx, Δy, Δs) for a primal-dual path-following interior point method for linear programming. The dual step Δy can be obtained by solving a weighted least-squares problem for which the weight matrix becomes extremely ill conditioned near the boundary of the feasible region. Hough and Vavasis proposed using a type of complete orthogonal decomposition (the COD algorithm) to solve such a problem and presented stability results. The work presented here addresses the stable computation of the primal step Δx and the change in the dual slacks Δs. These directions can be obtained in a straightforward manner, but near-degeneracy in the linear programming instance introduces ill-conditioning which can cause numerical problems in this approach. Therefore, we propose a new method of computing Δx and Δs. More specifically, this paper describes an orthogonal projection algorithm that extends the COD method. Unlike other algorithms, this method is stable for interior point methods without assuming nondegeneracy in the linear programming instance. Thus, it is more general than other algorithms on near-degenerate problems.
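
    Not the COD or orthogonal-projection algorithm described here; the sketch below only illustrates, on synthetic data, why a badly weighted least-squares problem of this kind is usually better attacked through a QR factorization of the scaled matrix than through the normal equations, whose conditioning is squared.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(3)
m, n = 60, 20
A = rng.standard_normal((m, n))
y_true = rng.standard_normal(n)
b = A @ y_true                                   # consistent right-hand side

# Near-boundary interior-point style weights: a few components become huge,
# making the scaled matrix D^{1/2} A extremely ill conditioned.
d = np.ones(m)
d[:5] = 1e12
W = np.sqrt(d)[:, None] * A                      # D^{1/2} A
rhs = np.sqrt(d) * b

# (a) Normal equations: condition number is squared.
y_normal = np.linalg.solve(W.T @ W, W.T @ rhs)

# (b) QR of the scaled matrix: works directly with D^{1/2} A.
Q, R = qr(W, mode="economic")
y_qr = solve_triangular(R, Q.T @ rhs)

print("error (normal equations):", np.linalg.norm(y_normal - y_true))
print("error (QR)              :", np.linalg.norm(y_qr - y_true))
```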

  10. Linear Stability of Elliptic Lagrangian Solutions of the Planar Three-Body Problem via Index Theory

    NASA Astrophysics Data System (ADS)

    Hu, Xijun; Long, Yiming; Sun, Shanzhong

    2014-09-01

    It is well known that the linear stability of Lagrangian elliptic equilateral triangle homographic solutions in the classical planar three-body problem depends on the mass parameter β ∈ [0, 9] and the eccentricity e ∈ [0, 1). We are not aware of any existing analytical method which relates the linear stability of these solutions to the two parameters directly in the full rectangle [0, 9] × [0, 1), aside from perturbation methods for e > 0 small enough, blow-up techniques for e sufficiently close to 1, and numerical studies. In this paper, we introduce a new rigorous analytical method to study the linear stability of these solutions in terms of the two parameters in the full (β, e) range [0, 9] × [0, 1) via the ω-index theory of symplectic paths for ω belonging to the unit circle of the complex plane, and the theory of linear operators. After establishing the ω-index decreasing property of the solutions in β for fixed e, we prove the existence of three curves located from left to right in the rectangle [0, 9] × [0, 1), among which two are -1 degeneracy curves and the third one is the right envelope curve of the ω-degeneracy curves, and show that the linear stability pattern of such elliptic Lagrangian solutions changes if and only if the parameter (β, e) passes through each of these three curves. Interesting symmetries of these curves are also observed. The linear stability of the singular case when the eccentricity e approaches 1 is also analyzed in detail.

  11. Evaluation of boundary element methods for the EEG forward problem: Effect of linear interpolation

    SciTech Connect

    Schlitt, H.A.; Heller, L.; Best, E.; Ranken, D.M. ); Aaron, R. )

    1995-01-01

    We implement the approach for solving the boundary integral equation for the electroencephalography (EEG) forward problem proposed by de Munck, in which the electric potential varies linearly across each plane triangle of the mesh. Previous solutions have assumed the potential is constant across an element. We calculate the electric potential and systematically investigate the effect of different mesh choices and dipole locations by using a three concentric sphere head model for which there is an analytic solution. Implementing the linear interpolation approximation results in errors that are approximately half those of the same mesh when the potential is assumed to be constant, and provides a reliable method for solving the problem. 12 refs., 8 figs.

  12. Solving deterministic non-linear programming problem using Hopfield artificial neural network and genetic programming techniques

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2012-11-01

    Fairly reasonable results were obtained in the past for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic surveying. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.

  13. A linear semi-infinite programming strategy for constructing optimal wavelet transforms in multivariate calibration problems.

    PubMed

    Coelho, Clarimar José; Galvão, Roberto K H; de Araújo, Mário César U; Pimentel, Maria Fernanda; da Silva, Edvan Cirino

    2003-01-01

    A novel strategy for the optimization of wavelet transforms with respect to the statistics of the data set in multivariate calibration problems is proposed. The optimization follows a linear semi-infinite programming formulation, which does not display local maxima problems and can be reproducibly solved with modest computational effort. After the optimization, a variable selection algorithm is employed to choose a subset of wavelet coefficients with minimal collinearity. The selection allows the building of a calibration model by direct multiple linear regression on the wavelet coefficients. In an illustrative application involving the simultaneous determination of Mn, Mo, Cr, Ni, and Fe in steel samples by ICP-AES, the proposed strategy yielded more accurate predictions than PCR, PLS, and nonoptimized wavelet regression. PMID:12767151

  14. Solution of second order quasi-linear boundary value problems by a wavelet method

    SciTech Connect

    Zhang, Lei; Zhou, Youhe; Wang, Jizeng

    2015-03-10

    A wavelet Galerkin method based on expansions in Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one concerns nonlinear heat conduction and the other the bending of elastic beams. Numerical results are obtained by the proposed wavelet method. By comparing with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and the rate of convergence can even reach an order of 5.8.

  15. The conjugate gradient method for linear ill-posed problems with operator perturbations

    NASA Astrophysics Data System (ADS)

    Plato, Robert

    1999-03-01

    We consider an ill-posed problem Ta = f* in Hilbert spaces and suppose that the linear bounded operator T is approximately available, with a known estimate for the operator perturbation at the solution. As a numerical scheme the CGNR-method is considered, that is, the classical method of conjugate gradients by Hestenes and Stiefel applied to the associated normal equations. Two a posteriori stopping rules are introduced, and convergence results are provided for the corresponding approximations, respectively. As a specific application, a parameter estimation problem is considered.
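
    A minimal CGNR (CGLS) loop with a discrepancy-principle stopping rule is sketched below on a synthetic, mildly ill-posed problem; the a posteriori rules analyzed in this record, which account for operator perturbations, are more refined than this.

```python
import numpy as np

def cgnr(A, b, delta, tau=1.1, max_iter=500):
    """CG applied to the normal equations A'Ax = A'b (CGLS form), stopped by
    the discrepancy principle ||Ax - b|| <= tau * delta, where delta bounds
    the data noise level."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tau * delta:      # a posteriori stopping rule
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Mildly ill-posed synthetic problem with additive noise of known size.
rng = np.random.default_rng(4)
n = 100
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(0.9 ** np.arange(n)) @ V.T        # decaying singular values
x_true = rng.standard_normal(n)
noise = 1e-3 * rng.standard_normal(n)
b = A @ x_true + noise

x = cgnr(A, b, delta=np.linalg.norm(noise))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```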

  16. Shifting the closed-loop spectrum in the optimal linear quadratic regulator problem for hereditary systems

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1985-01-01

    In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.
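
    A finite-dimensional illustration of the alpha-shift (not the hereditary-system setting or the Kappel-Salamon scheme): solving the Riccati equation for the shifted generator A + alpha*I yields a gain K for which the closed-loop eigenvalues of A - BK lie to the left of -alpha. The system matrices below are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def alpha_shift_lqr(A, B, Q, R, alpha):
    """LQR gain computed for the shifted system (A + alpha*I, B); the resulting
    closed-loop matrix A - B K has spectrum with real parts below -alpha."""
    A_shift = A + alpha * np.eye(A.shape[0])
    P = solve_continuous_are(A_shift, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    return K

# Hypothetical lightly damped 2-state system.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

for alpha in (0.0, 0.5, 1.0):
    K = alpha_shift_lqr(A, B, Q, R, alpha)
    eigs = np.linalg.eigvals(A - B @ K)
    print(f"alpha = {alpha}: closed-loop eigenvalues {np.round(eigs, 3)}")
```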

  17. Shifting the closed-loop spectrum in the optimal linear quadratic regulator problem for hereditary systems

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1987-01-01

    In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.

  18. TOPSIS approach to linear fractional bi-level MODM problem based on fuzzy goal programming

    NASA Astrophysics Data System (ADS)

    Dey, Partha Pratim; Pramanik, Surapati; Giri, Bibhas C.

    2014-07-01

    The objective of this paper is to present a technique for order preference by similarity to ideal solution (TOPSIS) algorithm for the linear fractional bi-level multi-objective decision-making problem. TOPSIS is used to yield the most appropriate alternative from a finite set of alternatives based upon the simultaneous shortest distance from the positive ideal solution (PIS) and furthest distance from the negative ideal solution (NIS). In the proposed approach, first, the PIS and NIS for both levels are determined and the membership functions of the distance functions from the PIS and NIS of both levels are formulated. A linearization technique is used to transform the non-linear membership functions into equivalent linear membership functions, which are then normalized. A possible relaxation of the decisions at both levels is considered to avoid decision deadlock. Fuzzy goal programming models are then developed to achieve a compromise solution of the problem by minimizing the negative deviational variables. A distance function is used to identify the optimal compromise solution. The paper presents a hybrid model of TOPSIS and fuzzy goal programming. An illustrative numerical example is solved to clarify the proposed approach. Finally, to demonstrate the efficiency of the proposed approach, the obtained solution is compared with solutions derived from existing methods in the literature.

  19. Accelerated solution of non-linear flow problems using Chebyshev iteration polynomial based RK recursions

    SciTech Connect

    Lorber, A.A.; Carey, G.F.; Bova, S.W.; Harle, C.H.

    1996-12-31

    The connection between the solution of linear systems of equations by iterative methods and explicit time stepping techniques is used to accelerate to steady state the solution of ODE systems arising from discretized PDEs which may involve either physical or artificial transient terms. Specifically, a class of Runge-Kutta (RK) time integration schemes with extended stability domains has been used to develop recursion formulas which lead to accelerated iterative performance. The coefficients for the RK schemes are chosen based on the theory of Chebyshev iteration polynomials in conjunction with a local linear stability analysis. We refer to these schemes as Chebyshev Parameterized Runge Kutta (CPRK) methods. CPRK methods of one to four stages are derived as functions of the parameters which describe an ellipse E which the stability domain of the methods is known to contain. Of particular interest are two-stage, first-order CPRK and four-stage, first-order methods. It is found that the former method can be identified with any two-stage RK method through the correct choice of parameters. The latter method is found to have a wide range of stability domains, with a maximum extension of 32 along the real axis. Recursion performance results are presented below for a model linear convection-diffusion problem as well as non-linear fluid flow problems discretized by both finite-difference and finite-element methods.

  20. Legendre-tau approximation for functional differential equations. Part 2: The linear quadratic optimal control problem

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1984-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  1. Legendre-tau approximation for functional differential equations. II - The linear quadratic optimal control problem

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi; Teglas, Russell

    1987-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  2. A linear-quadratic-Gaussian control problem with innovations-feedthrough solution

    NASA Technical Reports Server (NTRS)

    Platzman, L. K.; Johnson, T. L.

    1976-01-01

    The structure of the separation-theorem solution to the standard linear-quadratic-Gaussian (LQG) control problem does not involve direct output feedback as a consequence of the form of the performance index. It is shown that the performance index may be generalized in a natural fashion so that the optimal control law involves output feedback or, equivalently, innovations feedthrough (IF). Applications where this formulation may be advantageous are indicated through an examination of properties of the IF control law.

  3. A Conforming Multigrid Method for the Pure Traction Problem of Linear Elasticity: Mixed Formulation

    NASA Technical Reports Server (NTRS)

    Lee, Chang-Ock

    1996-01-01

    A multigrid method using conforming P-1 finite element is developed for the two-dimensional pure traction boundary value problem of linear elasticity. The convergence is uniform even as the material becomes nearly incompressible. A heuristic argument for acceleration of the multigrid method is discussed as well. Numerical results with and without this acceleration as well as performance estimates on a parallel computer are included.

  4. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    SciTech Connect

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithms development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project begun with an investigation on how to practically update a preconditioner obtained from an ILU-type factorization, when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from Multigrid with algebraic multilevel methods. 9. We have released a new version on our parallel solver - called pARMS [new version is version 3]. As part of this we have tested the code in complex settings - including the solution of Maxwell and Helmholtz equations and for a problem of crystal growth.10. As an application of polynomial preconditioning we considered the

  5. LQR problem of linear discrete time systems with nonnegative state constraints

    NASA Astrophysics Data System (ADS)

    Kostova, S.; Imsland, L.; Ivanov, I.

    2015-10-01

    In the paper the infinite-horizon Linear Quadratic Regulator (LQR) problem for linear discrete time systems with non-negative state constraints is presented. Such constraints on the system define the class of positive systems, which have wide application in many fields such as economics, biology, ecology, ICT and others. The standard infinite-horizon LQR-optimal state feedback law is used for solving the problem. In order to guarantee the nonnegativity of the system states, we define the admissible set of initial states. It is proven that, for each initial state from this set, the nonnegative orthant is an invariant set. Two cases are considered: first, when the initial state belongs to the admissible set, and second, when it does not. Procedures for solving the problem are given for both cases. In the second case we use a dual-mode approach: the first mode is applied until the state trajectory enters the admissible set, after which the procedure for the first case is used. Illustrative examples are given for both cases.
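
    A small sketch of the first case only (hypothetical positive system, not the paper's dual-mode procedure): compute the standard infinite-horizon discrete-time LQR feedback and verify by simulation that a given initial state keeps the closed-loop trajectory in the nonnegative orthant.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Standard infinite-horizon discrete-time LQR gain."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def stays_nonnegative(A, B, K, x0, steps=200, tol=-1e-12):
    """Simulate x_{k+1} = (A - B K) x_k and check the nonnegative orthant."""
    x = np.asarray(x0, dtype=float)
    Acl = A - B @ K
    for _ in range(steps):
        if np.min(x) < tol:
            return False
        x = Acl @ x
    return True

# Hypothetical positive system with a single input.
A = np.array([[0.7, 0.2], [0.1, 0.8]])
B = np.array([[0.1], [0.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = dlqr(A, B, Q, R)
print("gain K =", K)
print("x0 = [1, 1] keeps the trajectory nonnegative?", stays_nonnegative(A, B, K, [1.0, 1.0]))
```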

  6. Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems

    PubMed Central

    Choi, Sou-Cheng T.; Saunders, Michael A.

    2014-01-01

    We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255

  7. Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems.

    PubMed

    Choi, Sou-Cheng T; Saunders, Michael A

    2014-02-01

    We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255

  8. A Linear Time Algorithm for the Minimum Spanning Caterpillar Problem for Bounded Treewidth Graphs

    NASA Astrophysics Data System (ADS)

    Dinneen, Michael J.; Khosravani, Masoud

    We consider the Minimum Spanning Caterpillar Problem (MSCP) in a graph where each edge has two costs, spine (path) cost and leaf cost, depending on whether it is used as a spine or a leaf edge. The goal is to find a spanning caterpillar in which the sum of its edge costs is the minimum. We show that the problem has a linear time algorithm when a tree decomposition of the graph is given as part of the input. Despite the fast growing constant factor of the time complexity of our algorithm, it is still practical and efficient for some classes of graphs, such as outerplanar, series-parallel (K 4 minor-free), and Halin graphs. We also briefly explain how one can modify our algorithm to solve the Minimum Spanning Ring Star and the Dual Cost Minimum Spanning Tree Problems.

  9. IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1994-01-01

    IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.

  10. Boundary parametric approximation to the linearized scalar potential magnetostatic field problem

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.

    1984-01-01

    We consider the linearized scalar potential formulation of the magnetostatic field problem in this paper. Our approach involves a reformulation of the continuous problem as a parametric boundary problem. By the introduction of a spherical interface and the use of spherical harmonics, the infinite boundary conditions can also be satisfied in the parametric framework. That is, the field in the exterior of a sphere is expanded in a harmonic series of eigenfunctions for the exterior harmonic problem. The approach is essentially a finite element method coupled with a spectral method via a boundary parametric procedure. The reformulated problem is discretized by finite element techniques which lead to a discrete parametric problem which can be solved by well conditioned iteration involving only the solution of decoupled Neumann type elliptic finite element systems and L^2 projection onto subspaces of spherical harmonics. Error and stability estimates given show exponential convergence in the degree of the spherical harmonics and optimal order convergence with respect to the finite element approximation for the resulting fields in L^2. 24 references.

  11. Bohrian Complementarity in the Light of Kantian Teleology

    NASA Astrophysics Data System (ADS)

    Pringe, Hernán

    2014-03-01

    The Kantian influences on Bohr's thought and the relationship between the perspective of complementarity in physics and in biology seem at first sight completely unrelated issues. However, the goal of this work is to show their intimate connection. We shall see that Bohr's views on biology shed light on Kantian elements of his thought, which enables a better understanding of his complementary interpretation of quantum theory. For this purpose, we shall begin by discussing Bohr's views on the analogies concerning the epistemological situation in biology and in physics. Later, we shall compare the Bohrian and the Kantian approaches to the science of life in order to show their close connection. On this basis, we shall finally turn to the issue of complementarity in quantum theory in order to assess what we can learn about the epistemological problems in the quantum realm from a consideration of Kant's views on teleology.

  12. Acceleration of multiple solution of a boundary value problem involving a linear algebraic system

    NASA Astrophysics Data System (ADS)

    Gazizov, Talgat R.; Kuksenko, Sergey P.; Surovtsev, Roman S.

    2016-06-01

    Multiple solution of a boundary value problem that involves a linear algebraic system is considered. A new approach to accelerating the solution is proposed. The approach exploits the structure of the linear system matrix: the entries located in the rightmost columns and bottom rows of the matrix, which are the ones that vary as the computation sweeps the range of parameters, are used to apply a block LU decomposition. Application of the approach is considered on the example of repeatedly computing the capacitance matrix by the method of moments used in numerical electromagnetics. Expressions for an analytic estimate of the acceleration are presented. Results of numerical experiments for the solution of 100 linear systems with matrix orders of 1000, 2000 and 3000, and with different ratios of varied to constant entries, show that block LU decomposition can be effective for multiple solution of linear systems. The speed-up compared to pointwise LU factorization increases (up to 15 times) with the number and order of the systems considered and decreases with the number of varied entries.
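
    A rough sketch of the reuse idea (not the authors' implementation): when only the trailing rows and columns of the matrix vary between solves, the LU factors of the large leading block can be computed once and each variant solved through its small Schur complement.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_with_reused_block(lu11, A12, A21, A22, b1, b2):
    """Solve [[A11, A12], [A21, A22]] [x1; x2] = [b1; b2] reusing lu11 = lu_factor(A11).

    Only the small Schur complement S = A22 - A21 A11^{-1} A12 is factored per variant."""
    Y = lu_solve(lu11, A12)                       # A11^{-1} A12
    y = lu_solve(lu11, b1)                        # A11^{-1} b1
    S = A22 - A21 @ Y                             # Schur complement
    x2 = np.linalg.solve(S, b2 - A21 @ y)
    x1 = y - Y @ x2
    return np.concatenate([x1, x2])

# Hypothetical setup: a large constant block A11 and a small varying border.
rng = np.random.default_rng(5)
n, k = 900, 40
A11 = rng.standard_normal((n, n)) + n * np.eye(n) # well conditioned, fixed block
lu11 = lu_factor(A11)                             # factored once, reused below

for trial in range(3):                            # e.g. sweeping a parameter
    A12 = rng.standard_normal((n, k))
    A21 = rng.standard_normal((k, n))
    A22 = rng.standard_normal((k, k)) + k * np.eye(k)
    b = rng.standard_normal(n + k)
    x = solve_with_reused_block(lu11, A12, A21, A22, b[:n], b[n:])
    A_full = np.block([[A11, A12], [A21, A22]])
    print("trial", trial, "residual:", np.linalg.norm(A_full @ x - b))
```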

  13. Reintroducing the Concept of Complementarity into Psychology

    PubMed Central

    Wang, Zheng; Busemeyer, Jerome

    2015-01-01

    Central to quantum theory is the concept of complementarity. In this essay, we argue that complementarity is also central to the emerging field of quantum cognition. We review the concept, its historical roots in psychology, and its development in quantum physics and offer examples of how it can be used to understand human cognition. The concept of complementarity provides a valuable and fresh perspective for organizing human cognitive phenomena and for understanding the nature of measurements in psychology. In turn, psychology can provide valuable new evidence and theoretical ideas to enrich this important scientific concept. PMID:26640454

  14. Analytical solution of boundary integral equations for 2-D steady linear wave problems

    NASA Astrophysics Data System (ADS)

    Chuang, J. M.

    2005-10-01

    Based on the Fourier transform, the analytical solution of boundary integral equations formulated for the complex velocity of a 2-D steady linear surface flow is derived. It has been found that before the radiation condition is imposed, free waves appear both far upstream and downstream. In order to cancel the free waves in far upstream regions, the eigensolution of a specific eigenvalue, which satisfies the homogeneous boundary integral equation, is found and superposed to the analytical solution. An example, a submerged vortex, is used to demonstrate the derived analytical solution. Furthermore, an analytical approach to imposing the radiation condition in the numerical solution of boundary integral equations for 2-D steady linear wave problems is proposed.

  15. Variable-permittivity linear inverse problem for the H_z-polarized case

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.; Chew, W. C.

    1993-01-01

    The H_z-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H_z-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.

  16. State-space models' dirty little secrets: even simple linear Gaussian models can have estimation problems.

    PubMed

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M; Derocher, Andrew E; Lewis, Mark A; Jonsen, Ian D; Mills Flemming, Joanna

    2016-01-01

    State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results. PMID:27220686
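
    To make the estimation issue concrete, here is a minimal Python sketch of a scalar linear Gaussian state-space model with its Kalman-filter log-likelihood (the model, parameter names, and values are illustrative assumptions, not the movement model or software used in the paper). When measurement error dominates the process noise, the log-likelihood profile over the process-noise parameter is nearly flat, which is the kind of weak identifiability the authors warn about.

      import numpy as np

      def kalman_loglik(y, phi, sigma_proc, sigma_obs):
          """Log-likelihood of the scalar model
             x_t = phi * x_{t-1} + N(0, sigma_proc^2),  y_t = x_t + N(0, sigma_obs^2)."""
          m, P, ll = 0.0, 1.0, 0.0
          for yt in y:
              m, P = phi * m, phi ** 2 * P + sigma_proc ** 2     # predict
              S = P + sigma_obs ** 2                             # innovation variance
              ll += -0.5 * (np.log(2 * np.pi * S) + (yt - m) ** 2 / S)
              K = P / S                                          # Kalman gain
              m, P = m + K * (yt - m), (1 - K) * P               # update
          return ll

      # Simulate a series in which measurement error dominates biological
      # stochasticity, the regime where the paper reports estimation problems.
      rng = np.random.default_rng(1)
      phi, sig_p, sig_o, T = 0.8, 0.1, 1.0, 200
      x = np.zeros(T)
      for t in range(1, T):
          x[t] = phi * x[t - 1] + sig_p * rng.standard_normal()
      y = x + sig_o * rng.standard_normal(T)

      # A nearly flat profile over sigma_proc signals weak identifiability.
      for sp in (0.05, 0.1, 0.2, 0.4):
          print(f"sigma_proc={sp:4.2f}  loglik={kalman_loglik(y, phi, sp, sig_o):9.2f}")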

  17. On Linear Instability and Stability of the Rayleigh-Taylor Problem in Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Jiang, Fei; Jiang, Song

    2015-12-01

    We investigate the stabilizing effects of the magnetic fields in the linearized magnetic Rayleigh-Taylor (RT) problem of a nonhomogeneous incompressible viscous magnetohydrodynamic fluid of zero resistivity in the presence of a uniform gravitational field in a three-dimensional bounded domain, in which the velocity of the fluid is non-slip on the boundary. By adapting a modified variational method and carefully deriving a priori estimates, we establish a criterion for the instability/stability of the linearized problem around a magnetic RT equilibrium state. In the criterion, we find a new phenomenon that a sufficiently strong horizontal magnetic field has the same stabilizing effect as that of the vertical magnetic field on the growth of the magnetic RT instability. In addition, we further study the corresponding compressible case, i.e., the Parker (or magnetic buoyancy) problem, for which the strength of a horizontal magnetic field decreases with height, and also show the stabilizing effect of a sufficiently large magnetic field.

  18. State-space models’ dirty little secrets: even simple linear Gaussian models can have estimation problems

    PubMed Central

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna

    2016-01-01

    State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results. PMID:27220686

  19. Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems

    SciTech Connect

    Avron, Haim; Ng, Esmond G.; Toledo, Sivan

    2008-03-21

    We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax - b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ||Ax - b||_2, when solved using LSQR. We propose applications for the new technique. When A is rank deficient we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
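
    A minimal Python sketch of the right-preconditioned LSQR idea follows. The problem is synthetic and the "perturbation" is a crude row dropping; the dimensions, density, and tolerances are illustrative assumptions rather than the paper's experiments.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import LinearOperator, lsqr

      rng = np.random.default_rng(0)

      # Sparse least-squares problem min_x ||Ax - b||_2 (synthetic stand-in).
      m, n = 500, 100
      A = (sp.random(m, n, density=0.02, random_state=0, format="csr")
           + sp.vstack([sp.eye(n), sp.csr_matrix((m - n, n))])).tocsr()  # full column rank
      b = rng.standard_normal(m)

      # Preconditioner: R factor of a perturbation of A (here, A with a few rows dropped).
      A_pert = A[: m - 5, :].toarray()
      R = np.linalg.qr(A_pert, mode="r")

      # Right-precondition: solve min_y ||(A R^-1) y - b||_2 with LSQR, then x = R^-1 y.
      M = LinearOperator(
          (m, n),
          matvec=lambda y: A @ np.linalg.solve(R, y),
          rmatvec=lambda z: np.linalg.solve(R.T, A.T @ z),
      )
      y = lsqr(M, b, atol=1e-10, btol=1e-10)[0]
      x = np.linalg.solve(R, y)
      print("least-squares residual norm:", np.linalg.norm(A @ x - b))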

  20. On the classical solution to the linear-constrained minimum energy problem

    NASA Astrophysics Data System (ADS)

    Boissaux, Marc; Schiltz, Jang

    2012-02-01

    Minimum energy problems involving linear systems with quadratic performance criteria are classical in optimal control theory. The case where controls are constrained is discussed in Athans and Falb (1966) [Athans, M. and Falb, P.L. (1966), Optimal Control: An Introduction to the Theory and Its Applications, New York: McGraw-Hill Book Co.] who obtain a componentwise optimal control expression involving a saturation function. We show why the given expression is not generally optimal in the case where the dimension of the control is greater than one and provide a numerical counterexample.

  1. Short communication: a linear assignment approach for the least-squares protein morphing problem.

    SciTech Connect

    Anitescu, M.; Park, S.; Mathematics and Computer Science

    2009-02-01

    This work addresses the computation of free-energy differences between protein conformations by using morphing (i.e., transformation) of a source conformation into a target conformation. To enhance the morphing procedure, we employ permutations of atoms: we seek to find the permutation s that minimizes the mean-square distance traveled by the atoms. Instead of performing this combinatorial search in the space of permutations, we show that the best permutation can be found by solving a linear assignment problem. We demonstrate that the use of such optimal permutations significantly improves the efficiency of the free-energy computation.
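
    The reduction to a linear assignment problem can be sketched in a few lines of Python with SciPy's assignment solver; the coordinates below are random stand-ins for source and target conformations, not real protein data.

      import numpy as np
      from scipy.optimize import linear_sum_assignment
      from scipy.spatial.distance import cdist

      rng = np.random.default_rng(0)
      source = rng.random((50, 3))                                   # "atom" coordinates
      target = rng.permutation(source) + 0.01 * rng.standard_normal((50, 3))

      # cost[i, j] = squared distance traveled if source atom i is mapped to
      # target position j; the optimal permutation minimizes the total cost.
      cost = cdist(source, target, metric="sqeuclidean")
      rows, perm = linear_sum_assignment(cost)

      msd_identity = np.mean(np.sum((source - target) ** 2, axis=1))
      msd_optimal = cost[rows, perm].mean()
      print(f"mean-square distance, identity mapping: {msd_identity:.4f}")
      print(f"mean-square distance, optimal mapping:  {msd_optimal:.4f}")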

  2. A Vector Study of Linearized Supersonic Flow Applications to Nonplanar Problems

    NASA Technical Reports Server (NTRS)

    Martin, John C

    1953-01-01

    A vector study of the partial-differential equation of steady linearized supersonic flow is presented. General expressions which relate the velocity potential in the stream to the conditions on the disturbing surfaces, are derived. In connection with these general expressions the concept of the finite part of an integral is discussed. A discussion of problems dealing with planar bodies is given and the conditions for the solution to be unique are investigated. Problems concerning nonplanar systems are investigated, and methods are derived for the solution of some simple nonplanar bodies. The surface pressure distribution and the damping in roll are found for rolling tails consisting of four, six, and eight rectangular fins for the Mach number range where the region of interference between adjacent fins does not affect the fin tips.

  3. A linear analytical boundary element method (BEM) for 2D homogeneous potential problems

    NASA Astrophysics Data System (ADS)

    Friedrich, Jürgen

    2002-06-01

    The solution of potential problems is not only fundamental for geosciences, but also an essential part of related subjects like electro- and fluid-mechanics. In all fields, solution algorithms are needed that should be as accurate as possible, robust, simple to program, easy to use, fast and small in computer memory. An ideal technique to fulfill these criteria is the boundary element method (BEM) which applies Green's identities to transform volume integrals into boundary integrals. This work describes a linear analytical BEM for 2D homogeneous potential problems that is more robust and precise than numerical methods because it avoids numerical schemes and coordinate transformations. After deriving the solution algorithm, the introduced approach is tested against different benchmarks. Finally, the resulting method was incorporated into an existing software program described previously in this journal by the same author.

  4. LINPRO: Linear inverse problem library for data contaminated by statistical noise

    NASA Astrophysics Data System (ADS)

    Magierski, Piotr; Wlazłowski, Gabriel

    2012-10-01

    The library LINPRO which provides the solution to the linear inverse problem for data contaminated by a statistical noise is presented. The library makes use of two methods: Maximum Entropy Method and Singular Value Decomposition. As an example it has been applied to perform an analytic continuation of the imaginary time propagator obtained within the Quantum Monte Carlo method. Program summary Program title: LINPRO v1.0. Catalogue identifier: AEMT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Lesser General Public Licence. No. of lines in distributed program, including test data, etc.: 110620. No. of bytes in distributed program, including test data, etc.: 3208593. Distribution format: tar.gz. Programming language: C++. Computer: LINPRO library should compile on any computing system that has C++ compiler. Operating system: Linux or Unix. Classification: 4.9, 4.12, 4.13. External routines: OPT++: An Object-Oriented Nonlinear Optimization Library [1] (included in the distribution). Nature of problem: LINPRO library solves linear inverse problems with an arbitrary kernel and arbitrary external constraints imposed on the solution. Solution method: LINPRO library implements two complementary methods: Maximum Entropy Method and SVD method. Additional comments: Tested with compilers-GNU Compiler g++, Intel Compiler icpc. Running time: Problem dependent, ranging from seconds to hours. Each of the examples takes less than a minute to run. References: [1] OPT++: An Object-Oriented Nonlinear Optimization Library, https://software.sandia.gov/opt++/.
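
    As a generic illustration of the SVD route that the library offers (this plain-NumPy sketch uses a synthetic smoothing kernel; it is not LINPRO itself, nor its Maximum Entropy Method or constraint handling), a truncated SVD stabilizes the inversion of data contaminated by noise.

      import numpy as np

      rng = np.random.default_rng(0)

      # Ill-conditioned kernel K and noisy data d = K f + noise (all synthetic).
      n = 60
      x = np.linspace(0.0, 1.0, n)
      K = np.exp(-30.0 * (x[:, None] - x[None, :]) ** 2)   # smoothing kernel
      f_true = np.sin(3 * np.pi * x)
      d = K @ f_true + 1e-3 * rng.standard_normal(n)

      # Truncated SVD: keep only singular values above a noise-related cutoff,
      # which regularizes the otherwise unstable inversion.
      U, s, Vt = np.linalg.svd(K)
      keep = s > 1e-2 * s[0]                                # assumed cutoff
      f_tsvd = Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

      print("kept singular values:", int(keep.sum()), "of", n)
      print("relative reconstruction error:",
            np.linalg.norm(f_tsvd - f_true) / np.linalg.norm(f_true))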

  5. Modified FGP approach and MATLAB program for solving multi-level linear fractional programming problems

    NASA Astrophysics Data System (ADS)

    Lachhwani, Kailash; Nehra, Suresh

    2015-09-01

    In this paper, we present a modified fuzzy goal programming (FGP) approach and a generalized MATLAB program for solving multi-level linear fractional programming problems (ML-LFPPs), based on earlier FGP algorithms with some major modifications. In the proposed modified FGP approach, solution preferences by the decision makers at each level are not considered, and the fuzzy goal for the decision vectors is defined using individual best solutions. The proposed modified algorithm, as well as the MATLAB program, simplifies the earlier algorithm for ML-LFPPs by eliminating solution preferences by the decision makers at each level, thereby avoiding difficulties associated with multi-level programming problems and decision deadlock situations. The proposed modified technique is simple, efficient, and requires less computational effort than earlier FGP techniques. Moreover, the proposed generalized MATLAB program based on this modified approach is a unique programming tool for dealing with such complex mathematical problems in MATLAB. This software-based program is useful, and users can directly obtain a compromise optimal solution of ML-LFPPs with it. The aim of this paper is to present the modified FGP technique and the generalized MATLAB program to obtain compromise optimal solutions of ML-LFP problems in a simple and efficient manner. A comparative analysis is also carried out with a numerical example in order to show the efficiency of the proposed modified approach and to demonstrate the functionality of the MATLAB program.

  6. Linear stability of the Couette flow of a vibrationally excited gas. 2. viscous problem

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yu. N.; Ershov, I. V.

    2016-03-01

    Based on the linear theory, stability of viscous disturbances in a supersonic plane Couette flow of a vibrationally excited gas described by a system of linearized equations of two-temperature gas dynamics including shear and bulk viscosity is studied. It is demonstrated that two sets are identified in the spectrum of the problem of stability of plane waves, similar to the case of a perfect gas. One set consists of viscous acoustic modes, which asymptotically converge to even and odd inviscid acoustic modes at high Reynolds numbers. The eigenvalues from the other set have no asymptotic relationship with the inviscid problem and are characterized by large damping decrements. Two most unstable viscous acoustic modes (I and II) are identified; the limits of these modes were considered previously in the inviscid approximation. It is shown that there are domains in the space of parameters for both modes, where the presence of viscosity induces appreciable destabilization of the flow. Moreover, the growth rates of disturbances are appreciably greater than the corresponding values for the inviscid flow, while thermal excitation in the entire considered range of parameters increases the stability of the viscous flow. For a vibrationally excited gas, the critical Reynolds number as a function of the thermal nonequilibrium degree is found to be greater by 12% than for a perfect gas.

  7. Complementarity relations for quantum coherence

    NASA Astrophysics Data System (ADS)

    Cheng, Shuming; Hall, Michael J. W.

    2015-10-01

    Various measures have been suggested recently for quantifying the coherence of a quantum state with respect to a given basis. We first use two of these, the l1-norm and relative entropy measures, to investigate tradeoffs between the coherences of mutually unbiased bases. Results include relations between coherence, uncertainty, and purity; tight general bounds restricting the coherences of mutually unbiased bases; and an exact complementarity relation for qubit coherences. We further define the average coherence of a quantum state. For the l1-norm measure this is related to a natural "coherence radius" for the state and leads to a conjecture for an l2-norm measure of coherence. For relative entropy the average coherence is determined by the difference between the von Neumann entropy and the quantum subentropy of the state and leads to upper bounds for the latter quantity. Finally, we point out that the relative entropy of coherence is a special case of G-asymmetry, which immediately yields several operational interpretations in contexts as diverse as frame alignment, quantum communication, and metrology, and suggests generalizing the property of quantum coherence to arbitrary groups of physical transformations.
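
    Both measures are straightforward to evaluate numerically for a qubit. The Python sketch below computes the l1-norm coherence and the relative entropy of coherence in the computational basis for a noisy |+⟩ state; the state and noise level are arbitrary illustrative choices.

      import numpy as np

      def l1_coherence(rho):
          """l1-norm coherence: sum of absolute values of off-diagonal elements."""
          return float(np.abs(rho).sum() - np.abs(np.diag(rho)).sum())

      def rel_entropy_coherence(rho):
          """Relative entropy of coherence: S(diag(rho)) - S(rho), in bits."""
          def vn_entropy(r):
              w = np.linalg.eigvalsh(r)
              w = w[w > 1e-12]
              return float(-(w * np.log2(w)).sum())
          return vn_entropy(np.diag(np.diag(rho))) - vn_entropy(rho)

      # |+><+| mixed with white noise; coherence is taken w.r.t. the computational basis.
      p = 0.8
      plus = np.array([[0.5, 0.5], [0.5, 0.5]])
      rho = p * plus + (1 - p) * 0.5 * np.eye(2)

      print("C_l1      =", l1_coherence(rho))
      print("C_rel.ent =", rel_entropy_coherence(rho))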

  8. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  9. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems.

    PubMed

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken. PMID:27301006
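
    A small Python experiment reproduces the setup in miniature for α=2 (an ordinary random graph and SciPy's LP solver; the graph size and density are arbitrary, and this is of course not the statistical mechanical analysis of the paper): the LP relaxation of min-VC is solved and the number of fractional, half-integral entries in the optimum is reported.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)

      # Random graph: each edge (u, v) contributes the covering constraint x_u + x_v >= 1.
      n_vertices, n_edges = 30, 60
      edges = set()
      while len(edges) < n_edges:
          u, v = rng.integers(0, n_vertices, 2)
          if u != v:
              edges.add((int(min(u, v)), int(max(u, v))))

      A = np.zeros((len(edges), n_vertices))
      for k, (u, v) in enumerate(edges):
          A[k, u] = A[k, v] = 1.0

      # LP relaxation of min vertex cover: minimize sum(x) s.t. A x >= 1, 0 <= x <= 1.
      res = linprog(c=np.ones(n_vertices), A_ub=-A, b_ub=-np.ones(len(edges)),
                    bounds=[(0, 1)] * n_vertices, method="highs")

      print("LP optimal value:", res.fun)
      # Fractional (half-integral) entries mark where the relaxation can differ
      # from the integer optimum.
      print("fractional variables:", int(np.sum((res.x > 1e-6) & (res.x < 1 - 1e-6))))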

  10. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.

  11. An improved exploratory search technique for pure integer linear programming problems

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1990-01-01

    The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
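
    In the same spirit, the core loop can be written as a short Python sketch: solve the continuous relaxation, round to a feasible integer point, then repeatedly accept any single ±1 change that stays feasible and improves the objective. This is a simplified illustration with made-up data, not the documented procedure or its Pascal implementation, and such a heuristic need not reach the exact optimum.

      import numpy as np
      from scipy.optimize import linprog

      # Toy pure integer LP: maximize c^T x  s.t.  A x <= b, x >= 0 integer.
      c = np.array([5.0, 4.0])
      A = np.array([[6.0, 4.0], [1.0, 2.0]])
      b = np.array([24.0, 6.0])

      def feasible(x):
          return np.all(x >= 0) and np.all(A @ x <= b + 1e-9)

      # Continuous relaxation (linprog minimizes, so negate c), then round down
      # (feasible here because A and b are non-negative).
      relax = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c), method="highs")
      x = np.floor(relax.x)

      # Neighborhood search: accept any +/-1 move that is feasible and improves c^T x.
      improved = True
      while improved:
          improved = False
          for i in range(len(x)):
              for step in (1.0, -1.0):
                  trial = x.copy()
                  trial[i] += step
                  if feasible(trial) and c @ trial > c @ x:
                      x, improved = trial, True

      print("heuristic integer solution:", x, " objective:", c @ x)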

  12. The initial value problem for linearized gravitational perturbations of the Schwarzschild naked singularity

    NASA Astrophysics Data System (ADS)

    Dotti, Gustavo; Gleiser, Reinaldo J.

    2009-11-01

    The coupled equations for the scalar modes of the linearized Einstein equations around Schwarzschild's spacetime were reduced by Zerilli to a (1+1) wave equation ∂²Ψ_z/∂t² + H Ψ_z = 0, where H = -∂²/∂x² + V(x) is the Zerilli 'Hamiltonian' and x is the tortoise radial coordinate. From its definition, for smooth metric perturbations the field Ψ_z is singular at r_s = -6M/[(ℓ-1)(ℓ+2)], with ℓ being the mode harmonic number. The equation Ψ_z obeys is also singular, since V has a second-order pole at r_s. This is irrelevant to the black hole exterior stability problem, where r > 2M > 0 and r_s < 0, but it introduces a non-trivial problem in the naked singular case where M < 0, then r_s > 0, and the singularity appears in the relevant range of r (0 < r < ∞). We solve this problem by developing a new approach to the evolution of the even mode, based on a new gauge-invariant function, Ψ̂, that is a regular function of the metric perturbation for any value of M. The relation of Ψ̂ to Ψ_z is provided by an intertwiner operator. The spatial pieces of the (1+1) wave equations that Ψ̂ and Ψ_z obey are related as a supersymmetric pair of quantum Hamiltonians H and Ĥ. For M < 0, Ĥ has a regular potential and a unique self-adjoint extension in a domain D defined by a physically motivated boundary condition at r = 0. This allows us to address the issue of evolution of gravitational perturbations in this non-globally hyperbolic background. This formulation is used to complete the proof of the linear instability of the Schwarzschild naked singularity, by showing that a previously found unstable mode belongs to a complete basis of Ĥ in D, and thus is excitable by generic initial data. This is further illustrated by numerically solving the linearized equations for suitably chosen initial data.

  13. Solution of a singularly perturbed Cauchy problem for linear systems of ordinary differential equations by the method of spectral decomposition

    NASA Astrophysics Data System (ADS)

    Shaldanbayev, Amir; Shomanbayeva, Manat; Kopzhassarova, Asylzat

    2016-08-01

    This paper proposes a fundamentally new method of investigation of a singularly perturbed Cauchy problem for a linear system of ordinary differential equations based on the spectral theory of equations with deviating argument.

  14. First-order system least squares for the pure traction problem in planar linear elasticity

    SciTech Connect

    Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.

    1996-12-31

    This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L² norms to define the FOSLS functional, is shown under certain H² regularity assumptions to admit optimal H¹-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H⁻¹ norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L² norm and for displacement in an H¹ norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.

  15. An exact solution to a certain non-linear random vibration problem

    NASA Astrophysics Data System (ADS)

    Dimentberg, M. F.

    A single-degree-of-freedom system with a special type of nonlinear damping and both external and parametric white-noise excitations is considered. For the special case when the intensities of coordinate and velocity modulation satisfy a certain condition, an exact analytical solution of the corresponding stationary Fokker-Planck-Kolmogorov equation is obtained, yielding an expression for the joint probability density of coordinate and velocity. This solution is analyzed particularly in connection with the stochastic stability problem for the corresponding linear system; certain implications are illustrated for a system that is stable with respect to probability but unstable in the mean square. The solution obtained may be used to check different approximate methods for the analysis of systems with randomly varying parameters.

  16. Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem

    NASA Technical Reports Server (NTRS)

    Lu, Huei-Iin; Robertson, Franklin R.

    1999-01-01

    A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response, while the under-determined (UD) systems yield a weak response to the tropical singularities. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can be alternatively solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity in an intermediate resolution. Improved solutions were achieved by including the singular-component solutions which best fit the observed wind data.

  17. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    SciTech Connect

    Wollaber, Allan B.; Larsen, Edward W.

    2011-02-20

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  18. Stabilizing inverse problems by internal data. II: non-local internal data and generic linearized uniqueness

    NASA Astrophysics Data System (ADS)

    Kuchment, Peter; Steinhauer, Dustin

    2015-12-01

    In the previous paper (Kuchment and Steinhauer in Inverse Probl 28(8):084007, 2012), the authors introduced a simple procedure that allows one to detect whether and explain why internal information arising in several novel coupled physics (hybrid) imaging modalities could turn extremely unstable techniques, such as optical tomography or electrical impedance tomography, into stable, good-resolution procedures. It was shown that in all cases of interest, the Fréchet derivative of the forward mapping is a pseudo-differential operator with an explicitly computable principal symbol. If one can set up the imaging procedure in such a way that the symbol is elliptic, this would indicate that the problem was stabilized. In the cases when the symbol is not elliptic, the technique suggests how to change the procedure (e.g., by adding extra measurements) to achieve ellipticity. In this article, we consider the situation arising in acousto-optical tomography (also called ultrasound modulated optical tomography), where the internal data available involves the Green's function, and thus depends globally on the unknown parameter(s) of the equation and its solution. It is shown that the technique of (Kuchment and Steinhauer in Inverse Probl 28(8):084007, 2012) can be successfully adopted to this situation as well. A significant part of the article is devoted to results on generic uniqueness for the linearized problem in a variety of situations, including those arising in acousto-electric and quantitative photoacoustic tomography.

  19. Interval analysis approach to rank determination in linear least squares problems

    SciTech Connect

    Manteuffel, T.A.

    1980-06-01

    The linear least-squares problem Ax ≈ b has a unique solution only if the matrix A has full column rank. Numerical rank determination is difficult, especially in the presence of uncertainties in the elements of A. This paper proposes an interval analysis approach. A set of matrices A^I is defined that contains all possible perturbations of A due to uncertainties; A^I is said to be rank deficient if any member of A^I is rank deficient. A modification to the Q-R decomposition method of solution of the least-squares problem allows a determination of the rank of A^I and a partial interval analysis of the solution vector x. This procedure requires the computation of R^(-1). Another modification is proposed which determines the rank of A^I without computing R^(-1). The additional computational effort is O(N^2), where N is the column dimension of A. 4 figures.

  20. Application and flight test of linearizing transformations using measurement feedback to the nonlinear control problem

    NASA Technical Reports Server (NTRS)

    Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.

    1991-01-01

    The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validates this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely matches simulation results. Flight-test data are also presented.

  1. A study of the use of linear programming techniques to improve the performance in design optimization problems

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
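
    For reference, the KS aggregation itself is compact. The Python sketch below (illustrative constraint values only, not the study's optimization setup) shows that the single KS constraint bounds the maximum of the original constraints from above and tightens as the draw-down factor ρ grows, so enforcing KS(g) ≤ 0 enforces all of the original constraints g_j ≤ 0.

      import numpy as np

      def ks_aggregate(g, rho=50.0):
          """Kreisselmeier-Steinhauser envelope of constraint values g_j(x) <= 0:
             KS(g) = (1/rho) * log(sum_j exp(rho * g_j)) >= max_j g_j."""
          g = np.asarray(g, dtype=float)
          gmax = g.max()                          # shift for numerical stability
          return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

      # Three constraint values, one of them nearly active.
      g = [-0.5, -0.02, -1.3]
      for rho in (5, 50, 500):
          print(f"rho={rho:4d}  KS={ks_aggregate(g, rho): .4f}  max(g)={max(g): .4f}")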

  2. Complementarity effects on tree growth are contingent on tree size and climatic conditions across Europe

    PubMed Central

    Madrigal-González, Jaime; Ruiz-Benito, Paloma; Ratcliffe, Sophia; Calatayud, Joaquín; Kändler, Gerald; Lehtonen, Aleksi; Dahlgren, Jonas; Wirth, Christian; Zavala, Miguel A.

    2016-01-01

    Neglecting tree size and stand structure dynamics might bias the interpretation of the diversity-productivity relationship in forests. Here we show evidence that complementarity is contingent on tree size across large-scale climatic gradients in Europe. We compiled growth data of the 14 most dominant tree species in 32,628 permanent plots covering boreal, temperate and Mediterranean forest biomes. Niche complementarity is expected to result in significant growth increments of trees surrounded by a larger proportion of functionally dissimilar neighbours. Functional dissimilarity at the tree level was assessed using four functional types: i.e. broad-leaved deciduous, broad-leaved evergreen, needle-leaved deciduous and needle-leaved evergreen. Using Linear Mixed Models, we show that complementarity effects depend on tree size along an energy availability gradient across Europe. Specifically: (i) complementarity effects at low and intermediate positions of the gradient (coldest-temperate areas) were stronger for small than for large trees; (ii) in contrast, at the upper end of the gradient (warmer regions), complementarity is more widespread in larger than smaller trees, which in turn showed negative growth responses to increased functional dissimilarity. Our findings suggest that the outcome of species mixing on stand productivity might critically depend on individual size distribution structure along gradients of environmental variation. PMID:27571971

  3. Complementarity effects on tree growth are contingent on tree size and climatic conditions across Europe.

    PubMed

    Madrigal-González, Jaime; Ruiz-Benito, Paloma; Ratcliffe, Sophia; Calatayud, Joaquín; Kändler, Gerald; Lehtonen, Aleksi; Dahlgren, Jonas; Wirth, Christian; Zavala, Miguel A

    2016-01-01

    Neglecting tree size and stand structure dynamics might bias the interpretation of the diversity-productivity relationship in forests. Here we show evidence that complementarity is contingent on tree size across large-scale climatic gradients in Europe. We compiled growth data of the 14 most dominant tree species in 32,628 permanent plots covering boreal, temperate and Mediterranean forest biomes. Niche complementarity is expected to result in significant growth increments of trees surrounded by a larger proportion of functionally dissimilar neighbours. Functional dissimilarity at the tree level was assessed using four functional types: i.e. broad-leaved deciduous, broad-leaved evergreen, needle-leaved deciduous and needle-leaved evergreen. Using Linear Mixed Models, we show that complementarity effects depend on tree size along an energy availability gradient across Europe. Specifically: (i) complementarity effects at low and intermediate positions of the gradient (coldest-temperate areas) were stronger for small than for large trees; (ii) in contrast, at the upper end of the gradient (warmer regions), complementarity is more widespread in larger than smaller trees, which in turn showed negative growth responses to increased functional dissimilarity. Our findings suggest that the outcome of species mixing on stand productivity might critically depend on individual size distribution structure along gradients of environmental variation. PMID:27571971

  4. Equivalence of linear stabilities of elliptic triangle solutions of the planar charged and classical three-body problems

    NASA Astrophysics Data System (ADS)

    Zhou, Qinglong; Long, Yiming

    2015-06-01

    In this paper, we prove that the linearized system of the elliptic triangle homographic solution of the planar charged three-body problem can be transformed to that of the elliptic equilateral triangle solution of the planar classical three-body problem. Consequently, the results of Martínez, Samà and Simó (2006) [15] and the results of Hu, Long and Sun (2014) [6] can be applied to these solutions of the charged three-body problem to obtain their linear stability.

  5. A semi-intrusive deterministic approach to uncertainty quantification in non-linear fluid flow problems

    NASA Astrophysics Data System (ADS)

    Abgrall, Rémi; Congedo, Pietro Marco

    2013-02-01

    This paper deals with the formulation of a semi-intrusive (SI) method allowing the computation of statistics of solutions of linear and non-linear PDEs. The method proves to be very efficient in dealing with probability density functions of arbitrary form, long-term integration, and discontinuities in stochastic space. Given a stochastic PDE where randomness is defined on Ω, starting from (i) a description of the solution in terms of the space variables, (ii) a numerical scheme defined for any event ω∈Ω and (iii) a family of random variables that may be correlated, the solution is numerically described by its conditional expectancies of point values or cell averages and its evaluation constructed from the deterministic scheme. One of the tools is a tessellation of the random space, as in finite volume methods for the space variables. Then, using these conditional expectancies and the geometrical description of the tessellation, a piecewise polynomial approximation in the random variables is computed using a reconstruction method that is standard for high order finite volume spaces, except that the measure is no longer the standard Lebesgue measure but the probability measure. This reconstruction is then used to formulate a scheme for the numerical approximation of the solution from the deterministic scheme. This new approach is called semi-intrusive because it requires only a limited amount of modification in a deterministic solver to quantify uncertainty on the state when the solver includes uncertain variables. The effectiveness of this method is illustrated for a modified version of the Kraichnan-Orszag three-mode problem where a discontinuous pdf is associated with the stochastic variable, and for a nozzle flow with shocks. The results have been analyzed in terms of accuracy and probability measure flexibility. Finally, the importance of the probabilistic reconstruction in the stochastic space is shown on an example where the exact solution is computable, the viscous

  6. A semi-intrusive deterministic approach to uncertainty quantification in non-linear fluid flow problems

    SciTech Connect

    Abgrall, Rémi; Congedo, Pietro Marco

    2013-02-15

    This paper deals with the formulation of a semi-intrusive (SI) method allowing the computation of statistics of solutions of linear and non-linear PDEs. The method proves to be very efficient in dealing with probability density functions of arbitrary form, long-term integration, and discontinuities in stochastic space. Given a stochastic PDE where randomness is defined on Ω, starting from (i) a description of the solution in terms of the space variables, (ii) a numerical scheme defined for any event ω∈Ω and (iii) a family of random variables that may be correlated, the solution is numerically described by its conditional expectancies of point values or cell averages and its evaluation constructed from the deterministic scheme. One of the tools is a tessellation of the random space, as in finite volume methods for the space variables. Then, using these conditional expectancies and the geometrical description of the tessellation, a piecewise polynomial approximation in the random variables is computed using a reconstruction method that is standard for high order finite volume spaces, except that the measure is no longer the standard Lebesgue measure but the probability measure. This reconstruction is then used to formulate a scheme for the numerical approximation of the solution from the deterministic scheme. This new approach is called semi-intrusive because it requires only a limited amount of modification in a deterministic solver to quantify uncertainty on the state when the solver includes uncertain variables. The effectiveness of this method is illustrated for a modified version of the Kraichnan–Orszag three-mode problem where a discontinuous pdf is associated with the stochastic variable, and for a nozzle flow with shocks. The results have been analyzed in terms of accuracy and probability measure flexibility. Finally, the importance of the probabilistic reconstruction in the stochastic space is shown on an example where the exact solution is computable, the viscous

  7. Self-complementarity of messenger RNA's of periodic proteins

    NASA Technical Reports Server (NTRS)

    Ycas, M.

    1973-01-01

    It is shown that the mRNA's of three periodic proteins, collagen, keratin and freezing point depressing glycoproteins show a marked degree of self-complementarity. The possible origin of this self-complementarity is discussed.

  8. A stabilized complementarity formulation for nonlinear analysis of 3D bimodular materials

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Zhang, H. W.; Wu, J.; Yan, B.

    2016-06-01

    Bi-modulus materials with different mechanical responses in tension and compression are often found in civil, composite, and biological engineering. Numerical analysis of bimodular materials is strongly nonlinear and convergence is usually a problem for traditional iterative schemes. This paper aims to develop a stabilized computational method for nonlinear analysis of 3D bimodular materials. Based on the parametric variational principle, a unified constitutive equation of 3D bimodular materials is proposed, which allows the eight principal stress states to be indicated by three parametric variables introduced in the principal stress directions. The original problem is transformed into a standard linear complementarity problem (LCP) by the parametric virtual work principle and a quadratic programming algorithm is developed by solving the LCP with the classic Lemke's algorithm. Update of elasticity and stiffness matrices is avoided and, thus, the proposed algorithm shows an excellent convergence behavior compared with traditional iterative schemes. Numerical examples show that the proposed method is valid and can accurately analyze mechanical responses of 3D bimodular materials. Also, stability of the algorithm is greatly improved.
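
    Because the formulation reduces the analysis to a standard LCP, a tiny Python sketch may help fix ideas. It uses a projected Gauss-Seidel iteration on a small symmetric positive definite example with arbitrary data; Lemke's pivoting algorithm, which the paper actually employs, handles a broader class of matrices.

      import numpy as np

      def pgs_lcp(M, q, iters=200):
          """Projected Gauss-Seidel iteration for the standard LCP:
             find z >= 0 with w = M z + q >= 0 and z^T w = 0."""
          z = np.zeros(len(q))
          for _ in range(iters):
              for i in range(len(q)):
                  r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual excluding z_i
                  z[i] = max(0.0, -r / M[i, i])
          return z

      # Small symmetric positive definite example (illustrative data).
      M = np.array([[4.0, 1.0, 0.0],
                    [1.0, 3.0, 1.0],
                    [0.0, 1.0, 2.0]])
      q = np.array([-1.0, 2.0, -3.0])

      z = pgs_lcp(M, q)
      w = M @ z + q
      print("z =", z)
      print("w =", w)
      print("complementarity z.w =", z @ w)   # should be ~0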

  9. A stabilized complementarity formulation for nonlinear analysis of 3D bimodular materials

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Zhang, H. W.; Wu, J.; Yan, B.

    2015-10-01

    Bi-modulus materials with different mechanical responses in tension and compression are often found in civil, composite, and biological engineering. Numerical analysis of bimodular materials is strongly nonlinear and convergence is usually a problem for traditional iterative schemes. This paper aims to develop a stabilized computational method for nonlinear analysis of 3D bimodular materials. Based on the parametric variational principle, a unified constitutive equation of 3D bimodular materials is proposed, which allows the eight principal stress states to be indicated by three parametric variables introduced in the principal stress directions. The original problem is transformed into a standard linear complementarity problem (LCP) by the parametric virtual work principle and a quadratic programming algorithm is developed by solving the LCP with the classic Lemke's algorithm. Update of elasticity and stiffness matrices is avoided and, thus, the proposed algorithm shows an excellent convergence behavior compared with traditional iterative schemes. Numerical examples show that the proposed method is valid and can accurately analyze mechanical responses of 3D bimodular materials. Also, stability of the algorithm is greatly improved.

  10. Taming the non-linearity problem in GPR full-waveform inversion for high contrast media

    NASA Astrophysics Data System (ADS)

    Meles, Giovanni; Greenhalgh, Stewart; van der Kruk, Jan; Green, Alan; Maurer, Hansruedi

    2012-03-01

    We present a new algorithm for the inversion of full-waveform ground-penetrating radar (GPR) data. It is designed to tame the non-linearity issue that afflicts inverse scattering problems, especially in high contrast media. We first investigate the limitations of current full-waveform time-domain inversion schemes for GPR data and then introduce a much-improved approach based on a combined frequency-time-domain analysis. We show by means of several synthetic tests and theoretical considerations that local minima trapping (common in full bandwidth time-domain inversion) can be avoided by starting the inversion with only the low frequency content of the data. Resolution associated with the high frequencies can then be achieved by progressively expanding to wider bandwidths as the iterations proceed. Although based on a frequency analysis of the data, the new method is entirely implemented by means of a time-domain forward solver, thus combining the benefits of both frequency-domain (low frequency inversion conveys stability and avoids convergence to a local minimum; whereas high frequency inversion conveys resolution) and time-domain methods (simplicity of interpretation and recognition of events; ready availability of FDTD simulation tools).

  11. Taming the non-linearity problem in GPR full-waveform inversion for high contrast media

    NASA Astrophysics Data System (ADS)

    Meles, Giovanni; Greenhalgh, Stewart; van der Kruk, Jan; Green, Alan; Maurer, Hansruedi

    2011-02-01

    We present a new algorithm for the inversion of full-waveform ground-penetrating radar (GPR) data. It is designed to tame the non-linearity issue that afflicts inverse scattering problems, especially in high contrast media. We first investigate the limitations of current full-waveform time-domain inversion schemes for GPR data and then introduce a much-improved approach based on a combined frequency-time-domain analysis. We show by means of several synthetic tests and theoretical considerations that local minima trapping (common in full bandwidth time-domain inversion) can be avoided by starting the inversion with only the low frequency content of the data. Resolution associated with the high frequencies can then be achieved by progressively expanding to wider bandwidths as the iterations proceed. Although based on a frequency analysis of the data, the new method is entirely implemented by means of a time-domain forward solver, thus combining the benefits of both frequency-domain (low frequency inversion conveys stability and avoids convergence to a local minimum; whereas high frequency inversion conveys resolution) and time-domain methods (simplicity of interpretation and recognition of events; ready availability of FDTD simulation tools).

  12. Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems

    NASA Astrophysics Data System (ADS)

    Puzyrev, Vladimir; Koric, Seid; Wilkin, Scott

    2016-04-01

    High performance computing is absolutely necessary for large-scale geophysical simulations. In order to obtain a realistic image of a geologically complex area, industrial surveys collect vast amounts of data making the computational cost extremely high for the subsequent simulations. A major computational bottleneck of modeling and inversion algorithms is solving the large sparse systems of linear ill-conditioned equations in complex domains with multiple right hand sides. Recently, parallel direct solvers have been successfully applied to multi-source seismic and electromagnetic problems. These methods are robust and exhibit good performance, but often require large amounts of memory and have limited scalability. In this paper, we evaluate modern direct solvers on large-scale modeling examples that previously were considered unachievable with these methods. Performance and scalability tests utilizing up to 65,536 cores on the Blue Waters supercomputer clearly illustrate the robustness, efficiency and competitiveness of direct solvers compared to iterative techniques. Wide use of direct methods utilizing modern parallel architectures will allow modeling tools to accurately support multi-source surveys and 3D data acquisition geometries, thus promoting a more efficient use of the electromagnetic methods in geophysics.
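
    The central computational pattern, factor once and reuse the factors for every source, can be shown in a few lines with SciPy's sparse LU interface; the matrix below is a synthetic stand-in, not an actual electromagnetic system, and the sizes are illustrative.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import splu

      rng = np.random.default_rng(0)

      # Sparse, non-symmetric system standing in for one frequency of a forward
      # problem, with several source terms (right-hand sides).
      n, n_sources = 2000, 8
      A = (sp.random(n, n, density=5e-4, random_state=0, format="csc")
           + 4.0 * sp.identity(n, format="csc"))
      B = rng.standard_normal((n, n_sources))

      lu = splu(A)                                      # expensive step, done once
      X = np.column_stack([lu.solve(B[:, k])            # cheap back-substitutions,
                           for k in range(n_sources)])  # one per right-hand side

      print("max residual:", np.max(np.abs(A @ X - B)))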

  13. A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem

    NASA Technical Reports Server (NTRS)

    Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad

    2010-01-01

    Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are in the form of optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. An efficient comparison framework compares the multi-route formulation and the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine if any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when a perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, cumulative arrival taxi time savings in the multi-route formulation can be as much as 3.6 hours greater than in the single route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single route formulation, but the average arrival taxi time is still significantly decreased.
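
    A toy route-choice model gives the flavor of the formulation. This is a drastically simplified Python sketch using SciPy's MILP interface, with made-up taxi times and a single crossing-capacity constraint; it is not the paper's scheduling model, which also sequences aircraft in time.

      import numpy as np
      from scipy.optimize import LinearConstraint, milp

      # Each of 3 arrivals picks either a runway-crossing route or a longer
      # perimeter route; at most one aircraft may use the crossing here.
      taxi_time = np.array([[4.0, 7.0],    # aircraft 0: crossing, perimeter
                            [5.0, 6.0],    # aircraft 1
                            [3.0, 9.0]])   # aircraft 2
      n_ac, n_routes = taxi_time.shape
      c = taxi_time.ravel()                # binary x[i, r], flattened row-wise

      # Each aircraft takes exactly one route.
      one_route = np.zeros((n_ac, n_ac * n_routes))
      for i in range(n_ac):
          one_route[i, i * n_routes:(i + 1) * n_routes] = 1.0

      # At most one aircraft on the crossing route (route index 0).
      crossing = np.zeros(n_ac * n_routes)
      crossing[0::n_routes] = 1.0

      res = milp(c=c,
                 constraints=[LinearConstraint(one_route, 1, 1),
                              LinearConstraint(crossing, 0, 1)],
                 integrality=np.ones(c.size, dtype=int))

      choice = res.x.reshape(n_ac, n_routes).argmax(axis=1)
      print("route per aircraft (0 = crossing, 1 = perimeter):", choice)
      print("total taxi time:", res.fun)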

  14. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.

  15. Complementarity of real-time neutron and synchrotron radiation structural investigations in molecular biology

    SciTech Connect

    Aksenov, V. L.; Kiselev, M. A.

    2010-12-15

    General problems of the complementarity of different physical methods, specific features of the interaction between neutrons and matter, and time-of-flight neutron diffraction are discussed. The results of studying the kinetics of structural changes in lipid membranes under hydration and the self-assembly of the lipid bilayer in the presence of a detergent are reported. The possibilities offered by the complementarity of neutron diffraction and X-ray synchrotron radiation, and by the development of free-electron lasers, are noted.

  16. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper, a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions, and linear constraints is introduced. The stochastic model is transformed into a deterministic multiple-objective nonlinear programming model by taking the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formulation with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formulation of the proposed approach. The necessary and sufficient conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm based on the reference direction and weighted sums is proposed. By varying the parameter vector on the right-hand side of the model, the decision maker can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
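
    A weighted-sum scalarization of this kind is easy to prototype. The Python sketch below is a deterministic toy with made-up covariance, expected-return, and liquidity data, solved directly with a general nonlinear programming routine rather than through the paper's linear parametric complementarity reformulation; it only illustrates how the quadratic and linear objectives are combined.

      import numpy as np
      from scipy.optimize import minimize

      # Toy data for the extended portfolio model: risk (quadratic), expected
      # return and liquidity (both linear). All numbers are illustrative.
      Sigma = np.array([[0.10, 0.02, 0.01],
                        [0.02, 0.08, 0.03],
                        [0.01, 0.03, 0.12]])
      mu = np.array([0.10, 0.12, 0.07])
      liq = np.array([0.9, 0.5, 0.8])
      w_risk, w_ret, w_liq = 1.0, 1.0, 0.5      # weighted-sum parameters

      def scalarized(x):
          # minimize risk, maximize return and liquidity
          return w_risk * x @ Sigma @ x - w_ret * mu @ x - w_liq * liq @ x

      cons = ({"type": "eq", "fun": lambda x: x.sum() - 1.0},)
      res = minimize(scalarized, x0=np.full(3, 1 / 3),
                     bounds=[(0.0, 1.0)] * 3, constraints=cons)
      print("portfolio weights:", np.round(res.x, 3))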

  17. A nodal inverse problem for a quasi-linear ordinary differential equation in the half-line

    NASA Astrophysics Data System (ADS)

    Pinasco, Juan P.; Scarola, Cristian

    2016-07-01

    In this paper we study an inverse problem for a quasi-linear ordinary differential equation with a monotonic weight in the half-line. First, we find the asymptotic behavior of the singular eigenvalues, and we obtain a Weyl-type asymptotics imposing an appropriate integrability condition on the weight. Then, we investigate the inverse problem of recovering the coefficients from nodal data. We show that any dense subset of nodes of the eigenfunctions is enough to recover the weight.

  18. A Family of Symmetric Linear Multistep Methods for the Numerical Solution of the Schroedinger Equation and Related Problems

    SciTech Connect

    Anastassi, Z. A.; Simos, T. E.

    2010-09-30

    We develop a new family of explicit symmetric linear multistep methods for the efficient numerical solution of the Schroedinger equation and related problems with oscillatory solution. The new methods are trigonometrically fitted and have improved intervals of periodicity as compared to the corresponding classical method with constant coefficients and other methods from the literature. We also apply the methods along with other known methods to real periodic problems, in order to measure their efficiency.
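
    As a point of reference for the class of methods discussed, the sketch below implements the classical symmetric two-step (Numerov) scheme for y'' = f(x)y with constant coefficients, which the trigonometrically fitted methods generalize; the harmonic-oscillator test problem and step size are illustrative choices, not taken from the paper.

```python
import numpy as np

def numerov(f, x, y0, y1):
    """Symmetric two-step Numerov scheme for y''(x) = f(x) * y(x) on a uniform grid x;
    y0, y1 are the first two values, and the full solution array is returned."""
    h = x[1] - x[0]
    c = (h * h / 12.0) * f(x)              # h^2 f(x_n) / 12
    y = np.empty_like(x)
    y[0], y[1] = y0, y1
    for n in range(1, len(x) - 1):
        y[n + 1] = (2.0 * (1.0 + 5.0 * c[n]) * y[n]
                    - (1.0 - c[n - 1]) * y[n - 1]) / (1.0 - c[n + 1])
    return y

# Oscillatory test: y'' = -w^2 y with exact solution cos(w x).
w = 5.0
x = np.linspace(0.0, 10.0, 2001)
y = numerov(lambda t: -w**2 * np.ones_like(t), x, 1.0, np.cos(w * x[1]))
print(np.max(np.abs(y - np.cos(w * x))))   # small global error
```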

  19. Non-linear stability of L 4 in the restricted problem when the primaries are finite straight segments under resonances

    NASA Astrophysics Data System (ADS)

    Jain, Ruchika; Sinha, Deepa

    2014-09-01

    The non-linear stability of L 4 in the restricted three-body problem when both primaries are finite straight segments in the presence of third and fourth order resonances has been investigated. Markeev's theorem (Markeev in Libration Points in Celestial Mechanics and Astrodynamics, 1978) is used to examine the non-linear stability for the resonance cases 2:1 and 3:1. It is found that the non-linear stability of L 4 depends on the lengths of the segments in both resonance cases. It is also found that the range of stability increases when compared with the classical restricted problem. The results have been applied in the following asteroids systems: (i) 216 Kleopatra-951 Gaspara, (ii) 9 Metis-433 Eros, (iii) 22 Kalliope-243 Ida.

  20. Large-scale learning of structure-activity relationships using a linear support vector machine and problem-specific metrics.

    PubMed

    Hinselmann, Georg; Rosenbaum, Lars; Jahn, Andreas; Fechner, Nikolas; Ostermann, Claude; Zell, Andreas

    2011-02-28

    The goal of this study was to adapt a recently proposed linear large-scale support vector machine to large-scale binary cheminformatics classification problems and to assess its performance on various benchmarks using virtual screening performance measures. We extended the large-scale linear support vector machine library LIBLINEAR with state-of-the-art virtual high-throughput screening metrics to train classifiers on whole large and unbalanced data sets. The formulation of this linear support vector machine has an excellent performance if applied to high-dimensional sparse feature vectors. An additional advantage is the average linear complexity in the number of non-zero features of a prediction. Nevertheless, the approach assumes that a problem is linearly separable. Therefore, we conducted an extensive benchmarking to evaluate the performance on large-scale problems up to a size of 175000 samples. To examine the virtual screening performance, we determined the chemotype clusters using Feature Trees and integrated this information to compute weighted AUC-based performance measures and a leave-cluster-out cross-validation. We also considered the BEDROC score, a metric that was suggested to tackle the early enrichment problem. The performance on each problem was evaluated by a nested cross-validation and a nested leave-cluster-out cross-validation. We compared LIBLINEAR against a Naïve Bayes classifier, a random decision forest classifier, and a maximum similarity ranking approach. These reference approaches were outperformed in a direct comparison by LIBLINEAR. A comparison to literature results showed that the LIBLINEAR performance is competitive but without achieving results as good as the top-ranked nonlinear machines on these benchmarks. However, considering the overall convincing performance and computation time of the large-scale support vector machine, the approach provides an excellent alternative to established large-scale classification approaches.
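
    A minimal sketch of the kind of pipeline described, using scikit-learn's LinearSVC (a wrapper around LIBLINEAR) with a class-weighted loss and a ROC-AUC ranking evaluation; the random placeholder data, feature dimension, and C value are assumptions and do not reproduce the study's benchmarks, chemotype clustering, or BEDROC analysis.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder data standing in for a sparse fingerprint matrix; actives are rare.
rng = np.random.default_rng(0)
X = rng.random((5000, 512))
y = (rng.random(5000) < 0.05).astype(int)            # ~5% positives: unbalanced

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LinearSVC(C=1.0, class_weight="balanced")      # weight classes inversely to frequency
clf.fit(X_tr, y_tr)

scores = clf.decision_function(X_te)                 # ranking scores for virtual screening
print("ROC AUC:", roc_auc_score(y_te, scores))
```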

  1. Linear regression models, least-squares problems, normal equations, and stopping criteria for the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Arioli, M.; Gratton, S.

    2012-11-01

    Minimum-variance unbiased estimates for linear regression models can be obtained by solving least-squares problems. The conjugate gradient method can be successfully used in solving the symmetric and positive definite normal equations obtained from these least-squares problems. Taking into account the results of Golub and Meurant (1997, 2009) [10,11], Hestenes and Stiefel (1952) [17], and Strakoš and Tichý (2002) [16], which make it possible to approximate the energy norm of the error during the conjugate gradient iterative process, we adapt the stopping criterion introduced by Arioli (2005) [18] to the normal equations taking into account the statistical properties of the underpinning linear regression problem. Moreover, we show how the energy norm of the error is linked to the χ2-distribution and to the Fisher-Snedecor distribution. Finally, we present the results of several numerical tests that experimentally validate the effectiveness of our stopping criteria.
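
    A bare-bones sketch of conjugate gradients applied to the normal equations of a least-squares problem, stopped on a simple relative-residual rule; the χ²-based statistical stopping criterion developed in the paper is not reproduced here, and the test regression is an illustrative placeholder.

```python
import numpy as np

def cgnr(A, b, tol=1e-8, maxit=500):
    """Conjugate gradients on the normal equations A^T A x = A^T b for min ||Ax - b||_2.
    Stops on the relative norm of the normal-equations residual."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)                  # residual of the normal equations
    p = r.copy()
    rho = r @ r
    rho0 = rho
    for _ in range(maxit):
        q = A.T @ (A @ p)
        alpha = rho / (p @ q)
        x += alpha * p
        r -= alpha * q
        rho_new = r @ r
        if np.sqrt(rho_new / rho0) < tol:
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

# Small illustrative regression: agreement with the direct least-squares solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
b = A @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.01 * rng.standard_normal(200)
print(np.allclose(cgnr(A, b), np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-6))
```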

  2. Skill complementarity enhances heterophily in collaboration networks

    NASA Astrophysics Data System (ADS)

    Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene

    2016-01-01

    Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity is positively correlated with their production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems.

  3. Skill complementarity enhances heterophily in collaboration networks

    PubMed Central

    Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene

    2016-01-01

    Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity is positively correlated with their production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems. PMID:26743687

  4. Skill complementarity enhances heterophily in collaboration networks.

    PubMed

    Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H Eugene

    2016-01-01

    Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity is positively correlated with their production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems. PMID:26743687

  5. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
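
    For orientation, the sketch below solves the same class of problem one observation vector at a time with SciPy's active-set NNLS solver; the patented combinatorial algorithm gains its speed by reorganizing these solves around shared active/passive sets, which this naive loop does not attempt, and the data are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import nnls

def columnwise_nnls(A, B):
    """Naive baseline: solve min ||A x - b||_2 with x >= 0 independently for each column of B."""
    return np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])

rng = np.random.default_rng(2)
A = rng.random((50, 8))
X_true = np.abs(rng.standard_normal((8, 100)))        # non-negative ground truth
B = A @ X_true + 0.01 * rng.standard_normal((50, 100))
X = columnwise_nnls(A, B)
print(X.shape, float(np.mean(np.abs(X - X_true))))
```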

  6. Horizons of description: Black holes and complementarity

    NASA Astrophysics Data System (ADS)

    Bokulich, Peter Joshua Martin

    Niels Bohr famously argued that a consistent understanding of quantum mechanics requires a new epistemic framework, which he named complementarity . This position asserts that even in the context of quantum theory, classical concepts must be used to understand and communicate measurement results. The apparent conflict between certain classical descriptions is avoided by recognizing that their application now crucially depends on the measurement context. Recently it has been argued that a new form of complementarity can provide a solution to the so-called information loss paradox. Stephen Hawking argues that the evolution of black holes cannot be described by standard unitary quantum evolution, because such evolution always preserves information, while the evaporation of a black hole will imply that any information that fell into it is irrevocably lost---hence a "paradox." Some researchers in quantum gravity have argued that this paradox can be resolved if one interprets certain seemingly incompatible descriptions of events around black holes as instead being complementary. In this dissertation I assess the extent to which this black hole complementarity can be undergirded by Bohr's account of the limitations of classical concepts. I begin by offering an interpretation of Bohr's complementarity and the role that it plays in his philosophy of quantum theory. After clarifying the nature of classical concepts, I offer an account of the limitations these concepts face, and argue that Bohr's appeal to disturbance is best understood as referring to these conceptual limits. Following preparatory chapters on issues in quantum field theory and black hole mechanics, I offer an analysis of the information loss paradox and various responses to it. I consider the three most prominent accounts of black hole complementarity and argue that they fail to offer sufficient justification for the proposed incompatibility between descriptions. The lesson that emerges from this

  7. Influence of geometrical parameters on the linear stability of a Bénard-Marangoni problem.

    PubMed

    Hoyas, S; Fajardo, P; Pérez-Quiles, M J

    2016-04-01

    A linear stability analysis of a thin liquid film flowing over a plate is performed. The analysis is performed in an annular domain when momentum diffusivity and thermal diffusivity are comparable (relatively low Prandtl number, Pr=1.2). The combined influence of the aspect ratio (Γ) and gravity, through the Bond number (Bo), on the linear stability of the flow is analyzed. Two different regions in the Γ-Bo plane have been identified. In the first one the basic state presents a linear regime (in which the temperature gradient does not change sign with r). In the second one, the flow presents a nonlinear regime, also called return flow. A great diversity of bifurcations has been found just by changing the domain depth d. The results obtained in this work are in agreement with some reported experiments, and give a deeper insight into the effect of physical parameters on bifurcations. PMID:27176388

  8. Influence of geometrical parameters on the linear stability of a Bénard-Marangoni problem

    NASA Astrophysics Data System (ADS)

    Hoyas, S.; Fajardo, P.; Pérez-Quiles, M. J.

    2016-04-01

    A linear stability analysis of a thin liquid film flowing over a plate is performed. The analysis is performed in an annular domain when momentum diffusivity and thermal diffusivity are comparable (relatively low Prandtl number, Pr=1.2). The combined influence of the aspect ratio (Γ) and gravity, through the Bond number (Bo), on the linear stability of the flow is analyzed. Two different regions in the Γ-Bo plane have been identified. In the first one the basic state presents a linear regime (in which the temperature gradient does not change sign with r). In the second one, the flow presents a nonlinear regime, also called return flow. A great diversity of bifurcations has been found just by changing the domain depth d. The results obtained in this work are in agreement with some reported experiments, and give a deeper insight into the effect of physical parameters on bifurcations.

  9. The Total Synthesis Problem of linear multivariable control. II - Unity feedback and the design morphism

    NASA Technical Reports Server (NTRS)

    Sain, M. K.; Antsaklis, P. J.; Gejji, R. R.; Wyman, B. F.; Peczkowski, J. L.

    1981-01-01

    Zames (1981) has observed that there is, in general, no 'separation principle' to guarantee optimality of a division between control law design and filtering of plant uncertainty. Peczkowski and Sain (1978) have solved a model matching problem using transfer functions. Taking into consideration this investigation, Peczkowski et al. (1979) proposed the Total Synthesis Problem (TSP), wherein both the command/output-response and command/control-response are to be synthesized, subject to the plant constraint. The TSP concept can be subdivided into a Nominal Design Problem (NDP), which is not dependent upon specific controller structures, and a Feedback Synthesis Problem (FSP), which is. Gejji (1980) found that NDP was characterized in terms of the plant structural matrices and a single, 'good' transfer function matrix. Sain et al. (1981) have extended this NDP work. The present investigation is concerned with a study of FSP for the unity feedback case. NDP, together with feedback synthesis, is understood as a Total Synthesis Problem.

  10. Epistemological Dimensions in Niels Bohr's Conceptualization of Complementarity

    NASA Astrophysics Data System (ADS)

    Derry, Gregory

    2008-03-01

    Contemporary explications of quantum theory are uniformly ahistorical in their accounts of complementarity. Such accounts typically present complementarity as a physical principle that prohibits simultaneous measurements of certain dynamical quantities or behaviors, attributing this principle to Niels Bohr. This conceptualization of complementarity, however, is virtually devoid of content and is only marginally related to Bohr's actual writing on the topic. Instead, what Bohr presented was a subtle and complex epistemological argument in which complementarity is a shorthand way to refer to an inclusive framework for the logical analysis of ideas. The important point to notice, historically, is that Bohr's work involving complementarity is not intended to be an improvement or addition to a particular physical theory (quantum mechanics), which Bohr regarded as already complete. Bohr's work involving complementarity is actually an argument related to the goals, meaning, and limitations of physical theory itself, grounded in deep epistemological considerations stemming from the fundamental discontinuity of nature on a microscopic scale.

  11. Constructive Processes in Linear Order Problems Revealed by Sentence Study Times

    ERIC Educational Resources Information Center

    Mynatt, Barbee T.; Smith, Kirk H.

    1977-01-01

    This research was a further test of the theory of constructive processes proposed by Foos, Smith, Sabol, and Mynatt (1976) to account for differences among presentation orders in the construction of linear orders. This theory is composed of different series of mental operations that must be performed when an order relationship is integrated with…

  12. On the continuous dependence with respect to sampling of the linear quadratic regulator problem for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Wang, C.

    1990-01-01

    The convergence of solutions to the discrete or sampled time linear quadratic regulator problem and associated Riccati equation for infinite dimensional systems to the solutions to the corresponding continuous time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero (infinity) is established. Both the finite and infinite time horizon problems are studied. In the finite time horizon case, strong continuity of the operators which define the control system and performance index together with a stability and consistency condition on the sampling scheme are required. For the infinite time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.
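
    A small finite-dimensional illustration of the convergence the paper establishes in the infinite-dimensional setting: the sampled-time Riccati solution approaches the continuous-time one as the sampling interval shrinks. The system matrices, the zero-order-hold discretization, and the scaling of the sampled cost are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_are, solve_discrete_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P_cont = solve_continuous_are(A, B, Q, R)

for h in [0.5, 0.1, 0.02, 0.004]:
    # Zero-order-hold discretization via the augmented matrix exponential.
    M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * h)
    Ad, Bd = M[:2, :2], M[:2, 2:]
    P_disc = solve_discrete_are(Ad, Bd, Q * h, R * h)
    print(h, np.linalg.norm(P_disc - P_cont))          # shrinks as h -> 0
```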

  13. Kalman Duality Principle for a Class of Ill-Posed Minimax Control Problems with Linear Differential-Algebraic Constraints

    SciTech Connect

    Zhuk, Sergiy

    2013-10-15

    In this paper we present the Kalman duality principle for a class of linear Differential-Algebraic Equations (DAE) with arbitrary index and time-varying coefficients. We apply it to an ill-posed minimax control problem with DAE constraint and derive a corresponding dual control problem. It turns out that the dual problem is ill-posed as well and so classical optimality conditions are not applicable in the general case. We construct a minimizing sequence û_ε for the dual problem applying the Tikhonov method. Finally we represent û_ε in feedback form using a Riccati equation on a subspace which corresponds to the differential part of the DAE.
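
    A finite-dimensional sketch of the Tikhonov construction referred to above: the regularized solutions u_ε = argmin ‖Au − b‖² + ε‖u‖² form a minimizing sequence as ε → 0 even when the unregularized problem is ill-posed. The DAE constraint, minimax structure, and Riccati feedback representation of the paper are not reproduced; the ill-conditioned operator below is an illustrative stand-in.

```python
import numpy as np

def tikhonov(A, b, eps):
    """Regularized solution u_eps = argmin ||A u - b||^2 + eps ||u||^2,
    computed from the normal equations (A^T A + eps I) u = A^T b."""
    return np.linalg.solve(A.T @ A + eps * np.eye(A.shape[1]), A.T @ b)

# Severely ill-conditioned illustrative operator: the residuals decrease as eps -> 0,
# so the u_eps form a minimizing sequence although the limit problem is unstable.
rng = np.random.default_rng(3)
U, _, Vt = np.linalg.svd(rng.standard_normal((30, 30)))
A = U @ np.diag(np.logspace(0, -12, 30)) @ Vt
b = A @ np.ones(30)
for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    u = tikhonov(A, b, eps)
    print(eps, np.linalg.norm(A @ u - b))
```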

  14. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.

  15. On the continuous dependence with respect to sampling of the linear quadratic regulator problem for distributed parameter system

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Wang, C.

    1992-01-01

    The convergence of solutions to the discrete- or sampled-time linear quadratic regulator problem and associated Riccati equation for infinite-dimensional systems to the solutions to the corresponding continuous time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero(infinity) is established. Both the finite-and infinite-time horizon problems are studied. In the finite-time horizon case, strong continuity of the operators that define the control system and performance index, together with a stability and consistency condition on the sampling scheme are required. For the infinite-time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.

  16. Solving Capelin Time Series Ecosystem Problem Using Hybrid ANN-GAs Model and Multiple Linear Regression Model

    NASA Astrophysics Data System (ADS)

    Eghnam, Karam M.; Sheta, Alaa F.

    2008-06-01

    Development of accurate models is necessary in critical applications such as prediction. In this paper, a solution to the stock prediction problem of the Barents Sea capelin is introduced using Artificial Neural Network (ANN) and Multiple Linear Regression (MLR) models. The Capelin stock in the Barents Sea is one of the largest in the world. It normally maintained a fishery with annual catches of up to 3 million tons. The Capelin stock problem has an impact on fish stock development. The proposed prediction model was developed using an ANN with its weights adapted using a Genetic Algorithm (GA). The proposed model was compared to the traditional linear MLR model. The results showed that the ANN-GA model produced an overall accuracy 21% better than the MLR model.
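
    The multiple linear regression baseline can be stated in a few lines; the sketch below fits it by ordinary least squares on synthetic placeholder predictors, since the capelin stock data and covariate choices of the study are not available here.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least-squares fit of y = b0 + b1*x1 + ... + bk*xk."""
    X1 = np.column_stack([np.ones(len(X)), X])        # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def predict_mlr(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Placeholder predictors standing in for stock-assessment covariates.
rng = np.random.default_rng(4)
X = rng.random((60, 3))
y = 1.5 + X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(60)
print(fit_mlr(X, y))                                  # close to [1.5, 2.0, -1.0, 0.5]
```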

  17. From Wave-Particle to Features-Event Complementarity

    NASA Astrophysics Data System (ADS)

    Auletta, G.; Torcal, L.

    2011-12-01

    The terms wave and particle are of classical origin and are inadequate in dealing with the novelties of quantum mechanics with respect to classical physics. In this paper we propose to substitute the wave-particle terminology with that of features-event complementarity. This approach aims at solving some of the problems affecting quantum mechanics since its birth. In our terminology, features are what is responsible for one of the most characterizing aspects of quantum mechanics: quantum correlations. We suggest that an (uninterpreted) basic ontology for quantum mechanics should be thought of as constituted by events, features and their dynamical interplay, and that its (interpreted) theoretical ontology (made up by three classes of theoretical entities: states, observables and properties) does not isomorphically correspond to the uninterpreted ontology. Operations, i.e. concrete interventions within the physical world, like preparation, premeasurement and measurement, together with reliable inferences, assure the bridge between interpreted and uninterpreted ontology.

  18. A convex complementarity approach for simulating large granular flows.

    SciTech Connect

    Tasora, A.; Anitescu, M.; Mathematics and Computer Science; Univ. degli Studi di Parma

    2010-07-01

    Aiming at the simulation of dense granular flows, we propose and test a numerical method based on successive convex complementarity problems. This approach originates from a multibody description of the granular flow: all the particles are simulated as rigid bodies with arbitrary shapes and frictional contacts. Unlike the discrete element method (DEM), the proposed approach does not require small integration time steps typical of stiff particle interaction; this fact, together with the development of optimized algorithms that can run also on parallel computing architectures, allows an efficient application of the proposed methodology to granular flows with a large number of particles. We present an application to the analysis of the refueling flow in pebble-bed nuclear reactors. Extensive validation of our method against both DEM and physical experiments results indicates that essential collective characteristics of dense granular flow are accurately predicted.
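
    One common iterative scheme for convex complementarity problems of this kind is projected Gauss-Seidel; the sketch below applies it to a small dense LCP with a symmetric positive definite matrix, purely to illustrate the complementarity structure, and is not the authors' optimized parallel solver.

```python
import numpy as np

def projected_gauss_seidel(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with w = M z + q >= 0 and z^T w = 0.
    Converges for symmetric positive definite M, a common case in contact problems."""
    z = np.zeros_like(q)
    for _ in range(iters):
        for i in range(len(q)):
            r = M[i] @ z + q[i]                # i-th component of the residual w
            z[i] = max(0.0, z[i] - r / M[i, i])
    return z

# Small random SPD instance; check the complementarity conditions at the solution.
rng = np.random.default_rng(5)
G = rng.standard_normal((6, 6))
M = G @ G.T + 6.0 * np.eye(6)
q = rng.standard_normal(6)
z = projected_gauss_seidel(M, q)
w = M @ z + q
print(z.min() >= 0.0, w.min() > -1e-6, abs(z @ w) < 1e-6)
```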

  19. Cauchy problem for non-linear systems of equations in the critical case

    NASA Astrophysics Data System (ADS)

    Kaikina, E. I.; Naumkin, P. I.; Shishmarev, I. A.

    2004-12-01

    The large-time asymptotic behaviour is studied for a system of non-linear evolution dissipative equations $u_t+\mathscr{N}(u,u)+\mathscr{L}u=0$, $x\in\mathbb{R}^n$, $t>0$, with initial data $u(0,x)=\widetilde{u}(x)$, $x\in\mathbb{R}^n$, where $\mathscr{L}$ is a linear pseudodifferential operator $\mathscr{L}u=\overline{\mathscr{F}}_{\xi\to x}(L(\xi)\widehat{u}(\xi))$ and the non-linearity $\mathscr{N}$ is a quadratic pseudodifferential operator $\mathscr{N}(u,u)=\overline{\mathscr{F}}_{\xi\to x}\sum_{k,l=1}^m\int_{\mathbb{R}^n}A^{kl}(t,\xi,y)\widehat{u}_k(t,\xi-y)\widehat{u}_l(t,y)\,dy$, where $\widehat{u}\equiv\mathscr{F}_{x\to\xi}u$ is the Fourier transform. Under the assumptions that the initial data $\widetilde{u}\in\mathbf{H}^{\beta,0}\cap\mathbf{H}^{0,\beta}$, $\beta>n/2$, are sufficiently small, where $\mathbf{H}^{n,m}=\{\phi\in\mathbf{L}^2:\Vert\langle x\rangle^m\langle i\partial_x\rangle^n\phi(x)\Vert_{\mathbf{L}^2}<\infty\}$, $\langle x\rangle=\sqrt{1+x^2}$, is a weighted Sobolev space, and that the total mass vector $M=\int\widetilde{u}(x)\,dx$

  20. Composite solvers for linear saddle point problems arising from the incompressible Stokes equations with highly heterogeneous viscosity structure

    NASA Astrophysics Data System (ADS)

    Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.

    2014-12-01

    Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator. These involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently-developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with `heavy' smoothers, which require solutions of large per-node sub-problems, well

  1. Linear and nonlinear pattern selection in Rayleigh-Benard stability problems

    NASA Technical Reports Server (NTRS)

    Davis, Sanford S.

    1993-01-01

    A new algorithm is introduced to compute finite-amplitude states using primitive variables for Rayleigh-Benard convection on relatively coarse meshes. The algorithm is based on a finite-difference matrix-splitting approach that separates all physical and dimensional effects into one-dimensional subsets. The nonlinear pattern selection process for steady convection in an air-filled square cavity with insulated side walls is investigated for Rayleigh numbers up to 20,000. The internalization of disturbances that evolve into coherent patterns is investigated and transient solutions from linear perturbation theory are compared with and contrasted to the full numerical simulations.

  2. Teacher-Designed Software for Interactive Linear Equations: Concepts, Interpretive Skills, Applications & Word-Problem Solving.

    ERIC Educational Resources Information Center

    Lawrence, Virginia

    No longer just a user of commercial software, the 21st century teacher is a designer of interactive software based on theories of learning. This software, a comprehensive study of straightline equations, enhances conceptual understanding, sketching, graphic interpretive and word problem solving skills as well as making connections to real-life and…

  3. Methodological and Epistemological Issues on Linear Regression Applied to Psychometric Variables in Problem Solving: Rethinking Variance

    ERIC Educational Resources Information Center

    Stamovlasis, Dimitrios

    2010-01-01

    The aim of the present paper is two-fold. First, it attempts to support previous findings on the role of some psychometric variables, such as, M-capacity, the degree of field dependence-independence, logical thinking and the mobility-fixity dimension, on students' achievement in chemistry problem solving. Second, the paper aims to raise some…

  4. Linear perturbative theory of the discrete cosmological N-body problem

    SciTech Connect

    Marcos, B.; Baertschiger, T.; Joyce, M.; Gabrielli, A.; Labini, F. Sylos

    2006-05-15

    We present a perturbative treatment of the evolution under their mutual self-gravity of particles displaced off an infinite perfect lattice, both for a static space and for a homogeneously expanding space as in cosmological N-body simulations. The treatment, analogous to that of perturbations to a crystal in solid state physics, can be seen as a discrete (i.e. particle) generalization of the perturbative solution in the Lagrangian formalism of a self-gravitating fluid. Working to linear order, we show explicitly that this fluid evolution is recovered in the limit that the initial perturbations are restricted to modes of wavelength much larger than the lattice spacing. The full spectrum of eigenvalues of the simple cubic lattice contains both oscillatory modes and unstable modes which grow slightly faster than in the fluid limit. A detailed comparison of our perturbative treatment, at linear order, with full numerical simulations is presented, for two very different classes of initial perturbation spectra. We find that the range of validity is similar to that of the perturbative fluid approximation (i.e. up to close to "shell-crossing"), but that the accuracy in tracing the evolution is superior. The formalism provides a powerful tool to systematically calculate discreteness effects at early times in cosmological N-body simulations.

  5. The incomplete inverse and its applications to the linear least squares problem

    NASA Technical Reports Server (NTRS)

    Morduch, G. E.

    1977-01-01

    A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It was proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that occurs when the data residuals are too large and when insufficient data to justify augmenting the model are available.

  6. A linear decomposition method for large optimization problems. Blueprint for development

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1982-01-01

    A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.

  7. Performance improvement for optimization of the non-linear geometric fitting problem in manufacturing metrology

    NASA Astrophysics Data System (ADS)

    Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano

    2014-08-01

    Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition speed and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: a non-linear least-squares fitting of a circle, sphere and cylinder. These geometries are chosen in consideration of their common use in practice; for example the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM) and a cylinder is the most relevant geometry for a pin-hole relation as an assembly feature to construct a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This causes a performance improvement in the optimization of a non-linear function fitting the three geometries. The results show that, with this combination, a higher quality of fitting results, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an ‘incomplete-point-cloud’, which is a situation where the point cloud does not cover a complete feature, e.g. covering only half of the total part surface, is also investigated. Finally, a case study of fitting a hemisphere is presented.
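
    A compact sketch of the non-linear least-squares circle fit discussed above, using SciPy's Levenberg-Marquardt solver; here the initial guess is taken crudely from the data centroid, whereas the article replaces that step with a chaos-optimization search, and the noisy test circle is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import least_squares

def circle_residuals(p, x, y):
    """Residuals d_i(p) = sqrt((x_i - a)^2 + (y_i - b)^2) - r for p = (a, b, r)."""
    a, b, r = p
    return np.hypot(x - a, y - b) - r

# Noisy points on a circle of centre (2, -1) and radius 3 (illustrative data).
rng = np.random.default_rng(6)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
x = 2.0 + 3.0 * np.cos(t) + 0.01 * rng.standard_normal(200)
y = -1.0 + 3.0 * np.sin(t) + 0.01 * rng.standard_normal(200)

# Crude centroid-based initial guess for the Levenberg-Marquardt iteration.
p0 = [x.mean(), y.mean(), np.hypot(x - x.mean(), y - y.mean()).mean()]
fit = least_squares(circle_residuals, p0, args=(x, y), method="lm")
print(fit.x)                                          # approximately [2, -1, 3]
```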

  8. The application of Green's theorem to the solution of boundary-value problems in linearized supersonic wing theory

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Lomax, Harvard

    1950-01-01

    Following the introduction of the linearized partial differential equation for nonsteady three-dimensional compressible flow, general methods of solution are given for the two and three-dimensional steady-state and two-dimensional unsteady-state equations. It is also pointed out that, in the absence of thickness effects, linear theory yields solutions consistent with the assumptions made when applied to lifting-surface problems for swept-back plan forms at sonic speeds. The solutions of the particular equations are determined in all cases by means of Green's theorem, and thus depend on the use of Green's equivalent layer of sources, sinks, and doublets. Improper integrals in the supersonic theory are treated by means of Hadamard's "finite part" technique.

  9. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
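
    The report derives general third-order Runge-Kutta families; purely as a point of reference, the sketch below implements one widely used explicit third-order member (the Shu-Osher SSP-RK3 scheme), which may or may not coincide with the report's five examples, and checks its order on a scalar test equation.

```python
import numpy as np

def ssprk3_step(f, t, u, h):
    """One step of the Shu-Osher strong-stability-preserving third-order Runge-Kutta scheme."""
    u1 = u + h * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + h * f(t + h, u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + h * f(t + 0.5 * h, u2))

# Convergence check on u' = -u: halving h should reduce the error by roughly 8x (third order).
for h in [0.1, 0.05, 0.025]:
    t, u = 0.0, 1.0
    while t < 1.0 - 1e-12:
        u = ssprk3_step(lambda s, v: -v, t, u, h)
        t += h
    print(h, abs(u - np.exp(-1.0)))
```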

  10. Exact analysis to any order of the linear coupling problem in the thin lens model

    SciTech Connect

    Ruggiero, A.G.

    1991-12-31

    In this report we attempt the exact solution of the motion of a charged particle in a circular accelerator under the effects of skew quadrupole errors. We adopt the model of error distributions, lumped in locations with zero extensions. This thin-lens approximation provides an analytical insight into the problem to any order. The total solution is expressed in terms of driving terms which are actually correlation factors to several orders. An application follows on the calculation and correction of tune-splitting and on the estimate of the role the higher-order terms play in the correction method.

  11. Exact analysis to any order of the linear coupling problem in the thin lens model

    SciTech Connect

    Ruggiero, A.G.

    1991-01-01

    In this report we attempt the exact solution of the motion of a charged particle in a circular accelerator under the effects of skew quadrupole errors. We adopt the model of error distributions, lumped in locations with zero extensions. This thin-lens approximation provides an analytical insight into the problem to any order. The total solution is expressed in terms of driving terms which are actually correlation factors to several orders. An application follows on the calculation and correction of tune-splitting and on the estimate of the role the higher-order terms play in the correction method.

  12. Strongly coupled dark energy cosmologies: preserving ΛCDM success and easing low scale problems - I. Linear theory revisited

    NASA Astrophysics Data System (ADS)

    Bonometto, Silvio A.; Mainini, Roberto; Macciò, Andrea V.

    2015-10-01

    In this first paper we discuss the linear theory and the background evolution of a new class of models we dub SCDEW: Strongly Coupled DE, plus WDM. In these models, WDM dominates today's matter density; like baryons, WDM is uncoupled. Dark energy is a scalar field Φ; its coupling to ancillary cold dark matter (CDM), whose today's density is ≪1 per cent, is an essential model feature. Such coupling, in fact, allows the formation of cosmic structures, in spite of very low WDM particle masses (˜100 eV). SCDEW models yield cosmic microwave background and linear large scale features substantially undistinguishable from ΛCDM, but thanks to the very low WDM masses they strongly alleviate ΛCDM issues on small scales, as confirmed via numerical simulations in the second associated paper. Moreover SCDEW cosmologies significantly ease the coincidence and fine tuning problems of ΛCDM and, by using a field theory approach, we also outline possible links with inflationary models. We also discuss a possible fading of the coupling at low redshifts which prevents non-linearities on the CDM component to cause computational problems. The (possible) low-z coupling suppression, its mechanism, and its consequences are however still open questions - not necessarily problems - for SCDEW models. The coupling intensity and the WDM particle mass, although being extra parameters in respect to ΛCDM, are found to be substantially constrained a priori so that, if SCDEW is the underlying cosmology, we expect most data to fit also ΛCDM predictions.

  13. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
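
    A small numerical sketch of the two calibration strategies being compared: fit the readings on the standards and invert the fitted line (classical calibration), versus regressing the standards directly on the readings (reverse regression); the straight-line data below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 10.0, 25)                          # certified standards
y = 2.0 + 0.8 * x + 0.05 * rng.standard_normal(x.size)  # instrument readings

# Classical calibration: fit y = b0 + b1*x, then invert  x_hat = (y_new - b0) / b1.
b1, b0 = np.polyfit(x, y, 1)
# Reverse regression: fit x = c0 + c1*y directly.
c1, c0 = np.polyfit(y, x, 1)

y_new = 6.5                                             # a new instrument reading
print("classical:", (y_new - b0) / b1)
print("reverse:  ", c0 + c1 * y_new)
```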

  14. On the solution of two-point linear differential eigenvalue problems. [numerical technique with application to Orr-Sommerfeld equation

    NASA Technical Reports Server (NTRS)

    Antar, B. N.

    1976-01-01

    A numerical technique is presented for locating the eigenvalues of two point linear differential eigenvalue problems. The technique is designed to search for complex eigenvalues belonging to complex operators. With this method, any domain of the complex eigenvalue plane could be scanned and the eigenvalues within it, if any, located. For an application of the method, the eigenvalues of the Orr-Sommerfeld equation of the plane Poiseuille flow are determined within a specified portion of the c-plane. The eigenvalues for alpha = 1 and R = 10,000 are tabulated and compared for accuracy with existing solutions.

  15. Extended cubic B-spline method for solving a linear system of second-order boundary value problems.

    PubMed

    Heilat, Ahmed Salem; Hamid, Nur Nadiah Abd; Ismail, Ahmad Izani Md

    2016-01-01

    A method based on extended cubic B-spline is proposed to solve a linear system of second-order boundary value problems. In this method, two free parameters, [Formula: see text] and [Formula: see text], play an important role in producing accurate results. Optimization of these parameters are carried out and the truncation error is calculated. This method is tested on three examples. The examples suggest that this method produces comparable or more accurate results than cubic B-spline and some other methods. PMID:27547688

  16. A Comparative Study of the Harmonic and Arithmetic Averaging of Diffusion Coefficients for Non-linear Heat Conduction Problems

    SciTech Connect

    Samet Y. Kadioglu; Robert R. Nourgaliev; Vincent A. Mousseau

    2008-03-01

    We perform a comparative study for the harmonic versus arithmetic averaging of the heat conduction coefficient when solving non-linear heat transfer problems. In the literature, the harmonic average is the method of choice, because it is widely believed that the harmonic average is the more accurate model. However, our analysis reveals that this is not necessarily true. For instance, we show a case in which the harmonic average is less accurate when a coarser mesh is used. More importantly, we demonstrate that if the boundary layers are finely resolved, then the harmonic and arithmetic averaging techniques are identical in the truncation error sense. Our analysis further reveals that the accuracy of these two techniques depends on how the physical problem is modeled.
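
    The two face-averaging formulas being compared can be stated in a few lines; the sketch below evaluates both for a cell-centred conductivity field, with no claim about which one the report's test problems favour.

```python
import numpy as np

def face_conductivities(k, kind="harmonic"):
    """Average cell-centred conductivities k[i], k[i+1] onto interior faces.
    harmonic:   2 k_i k_{i+1} / (k_i + k_{i+1})
    arithmetic: (k_i + k_{i+1}) / 2"""
    if kind == "harmonic":
        return 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])
    return 0.5 * (k[:-1] + k[1:])

k = np.array([1.0, 10.0, 100.0])                       # strongly varying conductivity
print(face_conductivities(k, "harmonic"))              # [ 1.818...  18.18...]
print(face_conductivities(k, "arithmetic"))            # [ 5.5  55. ]
```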

  17. Modeling Granular Materials as Compressible Non-Linear Fluids: Heat Transfer Boundary Value Problems

    SciTech Connect

    Massoudi, M.C.; Tran, P.X.

    2006-01-01

    We discuss three boundary value problems in the flow and heat transfer analysis in flowing granular materials: (i) the flow down an inclined plane with radiation effects at the free surface; (ii) the natural convection flow between two heated vertical walls; (iii) the shearing motion between two horizontal flat plates with heat conduction. It is assumed that the material behaves like a continuum, similar to a compressible nonlinear fluid where the effects of density gradients are incorporated in the stress tensor. For a fully developed flow the equations are simplified to a system of three nonlinear ordinary differential equations. The equations are made dimensionless and a parametric study is performed where the effects of various dimensionless numbers representing the effects of heat conduction, viscous dissipation, radiation, and so forth are presented.

  18. Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging

    SciTech Connect

    Fowler, Michael James

    2014-04-25

    In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy

  19. Horizon complementarity in elliptic de Sitter space

    NASA Astrophysics Data System (ADS)

    Hackl, Lucas; Neiman, Yasha

    2015-02-01

    We study a quantum field in elliptic de Sitter space dS4/Z2—the spacetime obtained from identifying antipodal points in dS4. We find that the operator algebra and Hilbert space cannot be defined for the entire space, but only for observable causal patches. This makes the system into an explicit realization of the horizon complementarity principle. In the absence of a global quantum theory, we propose a recipe for translating operators and states between observers. This translation involves information loss, in accordance with the fact that two observers see different patches of the spacetime. As a check, we recover the thermal state at the de Sitter temperature as a state that appears the same to all observers. This thermal state arises from the same functional that, in ordinary dS4, describes the Bunch-Davies vacuum.

  20. Exploring equivalence domain in non-linear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-02-01

    This paper presents a methodology to sample the equivalence domain (ED) in non-linear PDE-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how performance of the standard Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of Magnetotelluric, Controlled-source Electromagnetic (EM) and Global EM induction data.
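
    A toy sketch of the second stage described above, random sampling of an equivalence domain: perturbations of a low-misfit model are kept when their misfit stays below a threshold, yielding an ensemble of near-equivalent models. The misfit function, acceptance threshold, and sampling spread are illustrative assumptions; the CMAES search and the electromagnetic forward problems are not reproduced.

```python
import numpy as np

def sample_equivalence_domain(misfit, m_best, n_draws=20000, spread=0.5, threshold=1.01):
    """Keep random perturbations of m_best whose misfit is within `threshold` times
    the best misfit -- a crude ensemble of near-equivalent models for uncertainty estimates."""
    rng = np.random.default_rng(8)
    best = misfit(m_best)
    draws = m_best + spread * rng.standard_normal((n_draws, m_best.size))
    return np.array([m for m in draws if misfit(m) <= threshold * best])

# Toy misfit with a nearly flat valley along m[0] - m[1] (an equivalence direction);
# the constant 1.0 mimics the irreducible data-noise floor.
misfit = lambda m: (m[0] + m[1] - 1.0) ** 2 + 1e-3 * (m[0] - m[1]) ** 2 + 1.0
ensemble = sample_equivalence_domain(misfit, np.array([0.5, 0.5]))
print(len(ensemble),
      np.std(ensemble[:, 0] + ensemble[:, 1]),   # tightly constrained combination
      np.std(ensemble[:, 0] - ensemble[:, 1]))   # poorly constrained: equivalent models
```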

  1. Nature-inspired computing approach for solving non-linear singular Emden-Fowler problem arising in electromagnetic theory

    NASA Astrophysics Data System (ADS)

    Khan, Junaid Ali; Zahoor Raja, Muhammad Asif; Rashidi, Mohammad Mehdi; Syam, Muhammad Ibrahim; Majid Wazwaz, Abdul

    2015-10-01

    In this research, the well-known non-linear Lane-Emden-Fowler (LEF) equations are approximated by developing a nature-inspired stochastic computational intelligence algorithm. A trial solution of the model is formulated as an artificial feed-forward neural network model containing unknown adjustable parameters. From the LEF equation and its initial conditions, an energy function is constructed that is used in the algorithm for the optimisation of the networks in an unsupervised way. The proposed scheme is tested successfully by applying it on various test cases of initial value problems of LEF equations. The reliability and effectiveness of the scheme are validated through comprehensive statistical analysis. The obtained numerical results are in a good agreement with their corresponding exact solutions, which confirms the enhancement made by the proposed approach.

  2. Morse Index and Linear Stability of the Lagrangian Circular Orbit in a Three-Body-Type Problem Via Index Theory

    NASA Astrophysics Data System (ADS)

    Barutello, Vivina; Jadanza, Riccardo D.; Portaluri, Alessandro

    2016-01-01

    It is well known that the linear stability of the Lagrangian elliptic solutions in the classical planar three-body problem depends on a mass parameter β and on the eccentricity e of the orbit. We consider only the circular case ( e = 0) but under the action of a broader family of singular potentials: α-homogeneous potentials, for α in (0, 2), and the logarithmic one. It turns out indeed that the Lagrangian circular orbit persists also in this more general setting. We discover a region of linear stability expressed in terms of the homogeneity parameter α and the mass parameter β, then we compute the Morse index of this orbit and of its iterates and we find that the boundary of the stability region is the envelope of a family of curves on which the Morse indices of the iterates jump. In order to conduct our analysis we rely on a Maslov-type index theory devised and developed by Y. Long, X. Hu and S. Sun; a key role is played by an appropriate index theorem and by some precise computations of suitable Maslov-type indices.

  3. A Posteriori Bounds for Linear-Functional Outputs of Crouzeix-Raviart Finite Element Discretizations of the Incompressible Stokes Problem

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.; Paraschivoiu, Marius

    1998-01-01

    We present a finite element technique for the efficient generation of lower and upper bounds to outputs which are linear functionals of the solutions to the incompressible Stokes equations in two space dimensions; the finite element discretization is effected by Crouzeix-Raviart elements, the discontinuous pressure approximation of which is central to our approach. The bounds are based upon the construction of an augmented Lagrangian: the objective is a quadratic "energy" reformulation of the desired output; the constraints are the finite element equilibrium equations (including the incompressibility constraint), and the intersubdomain continuity conditions on velocity. Appeal to the dual max-min problem for appropriately chosen candidate Lagrange multipliers then yields inexpensive bounds for the output associated with a fine-mesh discretization; the Lagrange multipliers are generated by exploiting an associated coarse-mesh approximation. In addition to the requisite coarse-mesh calculations, the bound technique requires solution only of local subdomain Stokes problems on the fine-mesh. The method is illustrated for the Stokes equations, in which the outputs of interest are the flowrate past, and the lift force on, a body immersed in a channel.

  4. Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences

    SciTech Connect

    Jan Hesthaven

    2012-02-06

    Final report for DOE Contract DE-FG02-98ER25346, entitled "Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences." Principal Investigator: Jan S. Hesthaven, Division of Applied Mathematics, Brown University, Box F, Providence, RI 02912 (Jan.Hesthaven@Brown.edu), February 6, 2012. Note: This grant was originally awarded to Professor David Gottlieb and the majority of the work envisioned reflects his original ideas. However, when Prof. Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results: The effort in this project focuses on the development of high order accurate computational methods for the solution of hyperbolic equations with application to problems with strong shocks. While the methods are general, emphasis is on applications to gas dynamics with strong shocks.

  5. A toy model of black hole complementarity

    NASA Astrophysics Data System (ADS)

    Banerjee, Souvik; Bryan, Jan-Willem; Papadodimas, Kyriakos; Raju, Suvrat

    2016-05-01

    We consider the algebra of simple operators defined in a time band in a CFT with a holographic dual. When the band is smaller than the light crossing time of AdS, an entire causal diamond in the center of AdS is separated from the band by a horizon. We show that this algebra obeys a version of the Reeh-Schlieder theorem: the action of the algebra on the CFT vacuum can approximate any low energy state in the CFT arbitrarily well, but no operator within the algebra can exactly annihilate the vacuum. We show how to relate local excitations in the complement of the central diamond to simple operators in the band. Local excitations within the diamond are invisible to the algebra of simple operators in the band by causality, but can be related to complicated operators called "precursors". We use the Reeh-Schlieder theorem to write down a simple and explicit formula for these precursors on the boundary. We comment on the implications of our results for black hole complementarity and the emergence of bulk locality from the boundary.

  6. Quark lepton complementarity and renormalization group effects

    SciTech Connect

    Schmidt, Michael A.; Smirnov, Alexei Yu.

    2006-12-01

    We consider a scenario for the quark-lepton complementarity relations between mixing angles in which the bimaximal mixing follows from the neutrino mass matrix. According to this scenario in the lowest order the angle θ12 is ≈1σ (1.5°-2°) above the best fit point, coinciding practically with the tribimaximal mixing prediction. Realization of this scenario in the context of the seesaw type-I mechanism with leptonic Dirac mass matrices approximately equal to the quark mass matrices is studied. We calculate the renormalization group corrections to θ12 as well as to θ13 in the standard model (SM) and minimal supersymmetric standard model (MSSM). We find that in a large part of the parameter space corrections δθ12 are small or negligible. In the MSSM version of the scenario, the correction δθ12 is in general positive. Small negative corrections appear in the case of an inverted mass hierarchy and opposite CP parities of ν1 and ν2, when leading contributions to the θ12 running are strongly suppressed. The corrections are negative in the SM version in a large part of the parameter space for values of the relative CP phase of ν1 and ν2 satisfying φ > π/2.

  7. Bohr's Principle of Complementarity and Beyond

    NASA Astrophysics Data System (ADS)

    Jones, R.

    2004-05-01

    All knowledge is of an approximate character and always will be (Russell, Human Knowledge, 1948, pg 497, 507). The laws of nature are not unique (Smolin, Three Roads to Quantum Gravity, 2001, pg 195). There may be a number of different sets of equations which describe our data just as well as the present known laws do (Mitchell, Machine Learning, 1997, pg 65-66, and Cooper, Machine Learning, Vol. 9, 1992, pg 319). In the future every field of intellectual study will possess multiple theories of its domain, and scientific work and engineering will be performed based on the ensemble predictions of ALL of these. In some cases the theories may be quite divergent, differing greatly one from the other. The idea can be considered an extension of Bohr's notions of complementarity: "...different experimental arrangements... described by different physical concepts... together and only together exhaust the definable information we can obtain about the object" (Folse, The Philosophy of Niels Bohr, 1985, pg 238). This idea is not postmodernism. Witch doctors' theories will not form a part of medical science. Objective data, not human opinion, will decide which theories we use and how we weight their predictions.

  8. On some problems in a theory of thermally and mechanically interacting continuous media. Ph.D. Thesis; [linearized theory of interacting mixture of elastic solid and viscous fluid]

    NASA Technical Reports Server (NTRS)

    Lee, Y. M.

    1971-01-01

    Using a linearized theory of a thermally and mechanically interacting mixture of a linear elastic solid and a viscous fluid, we derive a fundamental relation in integral form called a reciprocity relation. This reciprocity relation relates the solution of one initial-boundary value problem, with a given set of initial and boundary data, to the solution of a second initial-boundary value problem corresponding to different initial and boundary data for the same interacting mixture. From this general integral relation, reciprocity relations are derived for a heat-conducting linear elastic solid and for a heat-conducting viscous fluid. An initial-boundary value problem is posed and solved for the mixture of linear elastic solid and viscous fluid. With the aid of the Laplace transform and contour integration, a real integral representation for the displacement of the solid constituent is obtained as one of the principal results of the analysis.

  9. A Semi-linear Backward Parabolic Cauchy Problem with Unbounded Coefficients of Hamilton–Jacobi–Bellman Type and Applications to Optimal Control

    SciTech Connect

    Addona, Davide

    2015-08-15

    We obtain weighted uniform estimates for the gradient of the solutions to a class of linear parabolic Cauchy problems with unbounded coefficients. Such estimates are then used to prove existence and uniqueness of the mild solution to a semi-linear backward parabolic Cauchy problem, where the differential equation is the Hamilton–Jacobi–Bellman equation of a suitable optimal control problem. Via backward stochastic differential equations, we show that the mild solution is indeed the value function of the controlled equation and that the feedback law is verified.

  10. Existence and spectral theory for weak solutions of Neumann and Dirichlet problems for linear degenerate elliptic operators with rough coefficients

    NASA Astrophysics Data System (ADS)

    Monticelli, Dario D.; Rodney, Scott

    2015-10-01

    In this paper we study existence and spectral properties for weak solutions of Neumann and Dirichlet problems associated with second order linear degenerate elliptic partial differential operators X with rough coefficients, of the form X = -div(P∇) + HR + S′G + F, where the n × n matrix function P = P(x) is nonnegative definite and allowed to degenerate, R, S are families of subunit vector fields, G, H are vector valued functions and F is a scalar function. We operate in a geometric homogeneous space setting and we assume the validity of certain Sobolev and Poincaré inequalities related to a symmetric nonnegative definite matrix of weights Q = Q(x) that is comparable to P; we do not assume that the underlying measure is doubling. We give a maximum principle for weak solutions of Xu ≤ 0, and we follow this with a result describing a relationship between compact projection of the degenerate Sobolev space QH^{1,p}, related to the matrix of weights Q, into L^q and a Poincaré inequality with gain adapted to Q.

  11. Sign problem in full configuration interaction quantum Monte Carlo: Linear and sublinear representation regimes for the exact wave function

    NASA Astrophysics Data System (ADS)

    Shepherd, James J.; Scuseria, Gustavo E.; Spencer, James S.

    2014-10-01

    We investigate the sign problem for full configuration interaction quantum Monte Carlo (FCIQMC), a stochastic algorithm for finding the ground-state solution of the Schrödinger equation with substantially reduced computational cost compared with exact diagonalization. We find k-space Hubbard models for which the solution is yielded with storage that grows sublinearly in the size of the many-body Hilbert space, in spite of using a wave function that is simply a linear combination of states. The FCIQMC algorithm is able to find this sublinear scaling regime without bias and with only a choice of the Hamiltonian basis. By means of a demonstration we solve for the energy of a 70-site half-filled system (with a space of 10^38 determinants) in 250 core hours, substantially quicker than the ~10^36 core hours that would be required by exact diagonalization. This is the largest space that has been sampled in an unbiased fashion. The challenge for the recently developed FCIQMC method is made clear: expand the sublinear scaling regime while retaining exact-on-average accuracy. We comment upon the relationship between this and the scaling law previously observed in the initiator adaptation (i-FCIQMC). We argue that our results change the landscape for the development of FCIQMC and related methods.

  12. Goertler vortices in growing boundary layers: The leading edge receptivity problem, linear growth and the nonlinear breakdown stage

    NASA Technical Reports Server (NTRS)

    Hall, Philip

    1989-01-01

    Goertler vortices are thought to be the cause of transition in many fluid flows of practical importance. A review of the different stages of vortex growth is given. In the linear regime, nonparallel effects completely govern this growth, and parallel flow theories do not capture the essential features of the development of the vortices. A detailed comparison between the parallel and nonparallel theories is given and it is shown that at small vortex wavelengths the parallel flow theories have some validity; otherwise nonparallel effects are dominant. New results for the receptivity problem for Goertler vortices are given; in particular, vortices induced by free stream perturbations impinging on the leading edge of the walls are considered. It is found that the most dangerous mode of this type can be isolated and its neutral curve is determined. This curve agrees very closely with the available experimental data. A discussion of the different regimes of growth of nonlinear vortices is also given. Again it is shown that, unless the vortex wavelength is small, nonparallel effects are dominant. Some new results for nonlinear vortices of O(1) wavelengths are given and compared to experimental observations.

  13. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    NASA Astrophysics Data System (ADS)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
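
    As a purely illustrative aside on the mixed LCP machinery these market models build on, the sketch below solves a small standard LCP (find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0) by projected successive over-relaxation; the matrix M and vector q are hypothetical stand-ins, not data from the dissertation's market model.

      import numpy as np

      def lcp_psor(M, q, omega=1.0, tol=1e-10, max_iter=10000):
          """Projected SOR for the LCP: find z >= 0 with w = M z + q >= 0 and z.w = 0."""
          z = np.zeros(len(q))
          for _ in range(max_iter):
              z_old = z.copy()
              for i in range(len(q)):
                  # Gauss-Seidel sweep with projection onto z[i] >= 0
                  r = q[i] + M[i] @ z
                  z[i] = max(0.0, z[i] - omega * r / M[i, i])
              if np.linalg.norm(z - z_old) < tol:
                  break
          return z

      # hypothetical 2x2 example with a positive definite M, not a market model
      M = np.array([[2.0, 1.0],
                    [1.0, 2.0]])
      q = np.array([-1.0, -1.0])
      z = lcp_psor(M, q)
      print(z, M @ z + q)   # expect z ~ [1/3, 1/3] and w ~ [0, 0]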

  14. Fault tolerant control for switching discrete-time systems with delays: an improved cone complementarity approach

    NASA Astrophysics Data System (ADS)

    Benzaouia, Abdellah; Ouladsine, Mustapha; Ananou, Bouchra

    2014-10-01

    In this paper, the fault tolerant control problem for discrete-time switching systems with delay is studied. Sufficient conditions for building an observer are obtained by using a multiple Lyapunov function. These conditions are worked out in a new way, using the cone complementarity technique, to obtain new LMIs with slack variables and multiple weighted residual matrices. The obtained results are applied to a numerical example showing fault detection, fault localisation, and reconfiguration of the control to maintain asymptotic stability even in the presence of a permanent sensor fault.

  15. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    NASA Astrophysics Data System (ADS)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
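
    To illustrate one of the formulations named above, the following sketch sets up the expected residual minimization (ERM) form of a stochastic LCP: the Monte Carlo average of the squared componentwise residual min(z, M(ω)z + q(ω)) is minimized over z ≥ 0. The random data and the solver choice are illustrative assumptions, not the FIT policy model of the article.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      n, n_samples = 3, 500
      A = rng.normal(size=(n_samples, n, n))
      Ms = A @ np.swapaxes(A, 1, 2) + n * np.eye(n)    # random symmetric positive definite matrices
      qs = rng.normal(size=(n_samples, n)) - 1.0       # random right-hand sides

      def expected_residual(z):
          w = np.einsum('kij,j->ki', Ms, z) + qs       # w_k = M_k z + q_k for each scenario k
          r = np.minimum(z, w)                         # componentwise complementarity residual
          return np.mean(np.sum(r ** 2, axis=1))

      res = minimize(expected_residual, x0=np.ones(n),
                     bounds=[(0.0, None)] * n, method='L-BFGS-B')
      print(res.x, expected_residual(res.x))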

  16. Information complementarity in multipartite quantum states and security in cryptography

    NASA Astrophysics Data System (ADS)

    Bera, Anindita; Kumar, Asutosh; Rakshit, Debraj; Prabhu, R.; SenDe, Aditi; Sen, Ujjwal

    2016-03-01

    We derive complementarity relations for arbitrary quantum states of multiparty systems of any number of parties and dimensions between the purity of a part of the system and several correlation quantities, including entanglement and other quantum correlations as well as classical and total correlations, of that part with the remainder of the system. We subsequently use such a complementarity relation between purity and quantum mutual information in the tripartite scenario to provide a bound on the secret key rate for individual attacks on a quantum key distribution protocol.

  17. Problem Based Learning Technique and Its Effect on Acquisition of Linear Programming Skills by Secondary School Students in Kenya

    ERIC Educational Resources Information Center

    Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice

    2015-01-01

    The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…

  18. The Effects of the Concrete-Representational-Abstract Integration Strategy on the Ability of Students with Learning Disabilities to Multiply Linear Expressions within Area Problems

    ERIC Educational Resources Information Center

    Strickland, Tricia K.; Maccini, Paula

    2013-01-01

    We examined the effects of the Concrete-Representational-Abstract Integration strategy on the ability of secondary students with learning disabilities to multiply linear algebraic expressions embedded within contextualized area problems. A multiple-probe design across three participants was used. Results indicated that the integration of the…

  19. ELAS: A general-purpose computer program for the equilibrium problems of linear structures. Volume 2: Documentation of the program. [subroutines and flow charts

    NASA Technical Reports Server (NTRS)

    Utku, S.

    1969-01-01

    A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of a piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the least-squares best-fit strain tensors at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions during assembly.
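
    A minimal sketch of the displacement method the program is built on, reduced to a two-element 1D bar: element stiffness matrices are assembled, the fixed-end boundary condition is imposed, and K u = f is solved for the nodal displacements. Material properties and loading are arbitrary illustrations, not part of the ELAS documentation.

      import numpy as np

      E, A = 210e9, 1e-4                    # Young's modulus (Pa) and cross-section (m^2)
      nodes = np.array([0.0, 0.5, 1.0])     # three nodes, two linear bar elements
      K = np.zeros((3, 3))
      for e in range(2):                    # assemble the element stiffness matrices
          L = nodes[e + 1] - nodes[e]
          ke = (E * A / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])
          K[e:e + 2, e:e + 2] += ke

      f = np.array([0.0, 0.0, 1000.0])      # 1 kN axial load at the free end
      free = [1, 2]                         # node 0 is fixed (u_0 = 0)
      u = np.zeros(3)
      u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
      print(u)                              # nodal displacements in metres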

  20. Steady induction effects in geomagnetism. Part 1B: Geomagnetic estimation of steady surficial core motions: A non-linear inverse problem

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
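
    The following sketch shows the generic shape of the iterative, weighted, linearized least-squares step described above, written as a damped Gauss-Newton loop for an assumed toy forward model; it is not the frozen-flux induction operator of the report.

      import numpy as np

      def gauss_newton(F, jac, d_obs, m0, W, lam=1e-2, n_iter=20):
          m = m0.copy()
          for _ in range(n_iter):
              r = d_obs - F(m)                         # data residual
              J = jac(m)
              # damped normal equations of the weighted, linearized problem
              A = J.T @ W @ J + lam * np.eye(len(m))
              m = m + np.linalg.solve(A, J.T @ W @ r)
          return m

      # hypothetical mildly nonlinear forward model d = G m + (G m)^2 (elementwise)
      G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
      F = lambda m: G @ m + (G @ m) ** 2
      jac = lambda m: G + 2.0 * np.diag(G @ m) @ G
      m_true = np.array([0.3, -0.2])
      d_obs = F(m_true)
      W = np.eye(3)                                    # data weights
      print(gauss_newton(F, jac, d_obs, np.zeros(2), W))   # recovers m_true approximately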

  1. Couple Complementarity and Similarity: A Review of the Literature.

    ERIC Educational Resources Information Center

    White, Stephen G.; Hatcher, Chris

    1984-01-01

    Examines couple complementarity and similarity, and their relationship to dyadic adjustment, from three perspectives: social/psychological research, clinical populations research, and the observations of family therapists. Methodological criticisms are discussed suggesting that the evidence for a relationship between similarity and…

  2. Development and application of a local linearization algorithm for the integration of quaternion rate equations in real-time flight simulation problems

    NASA Technical Reports Server (NTRS)

    Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.

    1973-01-01

    High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure for integrating dynamic system equations when using a digital computer in real time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
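
    A compact sketch of the local linearization idea for the quaternion rate equation q̇ = ½ Ω(ω) q: holding the body rate constant over a step, the update is the exact matrix exponential of ½ dt Ω(ω), which has a closed form because Ω(ω)² = -|ω|² I. The conventions (scalar-first quaternion) and test values are assumptions, not taken from the report.

      import numpy as np

      def omega_matrix(w):
          # 4x4 rate matrix for a scalar-first quaternion [q0, q1, q2, q3]
          p, q, r = w
          return np.array([[0.0, -p,  -q,  -r],
                           [p,   0.0,  r,  -q],
                           [q,  -r,   0.0,  p],
                           [r,   q,  -p,   0.0]])

      def step_local_linearization(quat, w, dt):
          wn = np.linalg.norm(w)
          if wn * dt < 1e-12:
              return quat.copy()                       # negligible rotation this step
          theta = 0.5 * dt * wn
          # exact exponential of 0.5*dt*Omega(w): cos(theta)*I + (sin(theta)/|w|)*Omega(w)
          update = np.cos(theta) * np.eye(4) + (np.sin(theta) / wn) * omega_matrix(w)
          q_new = update @ quat
          return q_new / np.linalg.norm(q_new)         # guard against round-off drift

      quat = np.array([1.0, 0.0, 0.0, 0.0])            # identity attitude
      w = np.array([0.2, -0.1, 0.5])                   # body rates (rad/s), constant over the step
      print(step_local_linearization(quat, w, 0.01))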

  3. On the limitations of linear beams for the problems of moving mass-beam interaction using a meshfree method

    NASA Astrophysics Data System (ADS)

    Kiani, Keivan; Nikkhoo, Ali

    2012-02-01

    This paper deals with the capabilities of linear and nonlinear beam theories in predicting the dynamic response of an elastically supported thin beam traversed by a moving mass. To this end, the discrete equations of motion are developed based on Lagrange's equations via the reproducing kernel particle method (RKPM). For the particular case of a simply supported beam, the Galerkin method is also employed to verify the results obtained by RKPM, and a reasonably good agreement is achieved. Variations of the maximum dynamic deflection and bending moment associated with the linear and nonlinear beam theories are investigated in terms of moving mass weight and velocity for various beam boundary conditions. It is demonstrated that for the majority of the moving mass velocities, the differences between the results of linear and nonlinear analyses become remarkable as the moving mass weight increases, particularly for high levels of moving mass velocity. Except for the cantilever beam, the nonlinear beam theory predicts a higher possibility of moving mass separation from the base beam compared to the linear one. Furthermore, the accuracy levels of the linear beam theory are determined for thin beams under large deflections and small rotations as a function of moving mass weight and velocity in various boundary conditions.

  4. Graphing the Model or Modeling the Graph? Not-so-Subtle Problems in Linear IS-LM Analysis.

    ERIC Educational Resources Information Center

    Alston, Richard M.; Chi, Wan Fu

    1989-01-01

    Outlines the differences between the traditional and modern theoretical models of demand for money. States that the two models are often used interchangeably in textbooks, causing ambiguity. Argues against the use of linear specifications that imply that income velocity can increase without limit and that autonomous components of aggregate demand…

  5. Sexual complementarity between host humoral toxicity and soldier caste in a polyembryonic wasp.

    PubMed

    Uka, Daisuke; Sakamoto, Takuma; Yoshimura, Jin; Iwabuchi, Kikuo

    2016-01-01

    Defense against enemies is a type of natural selection considered fundamentally equivalent between the sexes. In reality, however, whether males and females differ in defense strategy is unknown. Multiparasitism necessarily leads to the problem of defense for a parasite (parasitoid). The polyembryonic parasitic wasp Copidosoma floridanum is famous for its larval soldiers' ability to kill other parasites. This wasp also exhibits sexual differences not only with regard to the competitive ability of the soldier caste but also with regard to host immune enhancement. Female soldiers are more aggressive than male soldiers, and their numbers increase upon invasion of the host by other parasites. In this report, in vivo and in vitro competition assays were used to test whether females have a toxic humoral factor; if so, then its strength was compared with that of males. We found that females have a toxic factor that is much weaker than that of males. Our results imply sexual complementarity between host humoral toxicity and larval soldiers. We discuss how this sexual complementarity guarantees adaptive advantages for both males and females despite the one-sided killing of male reproductives by larval female soldiers in a mixed-sex brood. PMID:27385149

  6. Sexual complementarity between host humoral toxicity and soldier caste in a polyembryonic wasp

    PubMed Central

    Uka, Daisuke; Sakamoto, Takuma; Yoshimura, Jin; Iwabuchi, Kikuo

    2016-01-01

    Defense against enemies is a type of natural selection considered fundamentally equivalent between the sexes. In reality, however, whether males and females differ in defense strategy is unknown. Multiparasitism necessarily leads to the problem of defense for a parasite (parasitoid). The polyembryonic parasitic wasp Copidosoma floridanum is famous for its larval soldiers’ ability to kill other parasites. This wasp also exhibits sexual differences not only with regard to the competitive ability of the soldier caste but also with regard to host immune enhancement. Female soldiers are more aggressive than male soldiers, and their numbers increase upon invasion of the host by other parasites. In this report, in vivo and in vitro competition assays were used to test whether females have a toxic humoral factor; if so, then its strength was compared with that of males. We found that females have a toxic factor that is much weaker than that of males. Our results imply sexual complementarity between host humoral toxicity and larval soldiers. We discuss how this sexual complementarity guarantees adaptive advantages for both males and females despite the one-sided killing of male reproductives by larval female soldiers in a mixed-sex brood. PMID:27385149

  7. Theory of bimolecular reactions in a solution with linear traps: Application to the problem of target search on DNA

    NASA Astrophysics Data System (ADS)

    Turkin, Alexander; van Oijen, Antoine M.; Turkin, Anatoliy A.

    2015-11-01

    One-dimensional sliding along DNA as a means to accelerate protein target search is a well-known phenomenon occurring in various biological systems. Using a biomimetic approach, we have recently demonstrated the practical use of DNA-sliding peptides to speed up bimolecular reactions more than an order of magnitude by allowing the reactants to associate not only in the solution by three-dimensional (3D) diffusion, but also on DNA via one-dimensional (1D) diffusion [A. Turkin et al., Chem. Sci. (2015), 10.1039/C5SC03063C]. Here we present a mean-field kinetic model of a bimolecular reaction in a solution with linear extended sinks (e.g., DNA) that can intermittently trap molecules present in a solution. The model consists of chemical rate equations for mean concentrations of reacting species. Our model demonstrates that addition of linear traps to the solution can significantly accelerate reactant association. We show that at optimum concentrations of linear traps the 1D reaction pathway dominates in the kinetics of the bimolecular reaction; i.e., these 1D traps function as an assembly line of the reaction product. Moreover, we show that the association reaction on linear sinks between trapped reactants exhibits a nonclassical third-order behavior. Predictions of the model agree well with our experimental observations. Our model provides a general description of bimolecular reactions that are controlled by a combined 3D+1D mechanism and can be used to quantitatively describe both naturally occurring as well as biomimetic biochemical systems that reduce the dimensionality of search.
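
    The sketch below integrates a toy set of mean-field rate equations in the spirit of the 3D+1D picture described above: species A and B either associate directly in solution or associate after both bind a linear trap. All rate constants and the fixed trap concentration are hypothetical placeholders, not the parameters of the article's model.

      import numpy as np
      from scipy.integrate import solve_ivp

      k3, kon, koff, k1 = 0.05, 1.0, 0.1, 5.0    # illustrative rate constants
      T = 1.0                                     # trap (DNA) concentration, held fixed

      def rhs(t, y):
          A, B, At, Bt, C = y                     # free A/B, trap-bound A/B, product C
          dA = -k3 * A * B - kon * A * T + koff * At
          dB = -k3 * A * B - kon * B * T + koff * Bt
          dAt = kon * A * T - koff * At - k1 * At * Bt
          dBt = kon * B * T - koff * Bt - k1 * At * Bt
          dC = k3 * A * B + k1 * At * Bt          # product formed via the 3D and the 1D channel
          return [dA, dB, dAt, dBt, dC]

      sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 1.0, 0.0, 0.0, 0.0], max_step=0.5)
      print(sol.y[4, -1])                         # product yield with the traps present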

  8. Saul Rosenzweig's purview: from experimenter/experimentee complementarity to idiodynamics.

    PubMed

    Rosenzweig, Saul

    2004-06-01

    Following a brief personal biography, an exposition of Saul Rosenzweig's scientific contributions is presented. Starting in 1933 with experimenter/experimentee complementarity, this point of view was extended to implicit common factors in psychotherapy (Rosenzweig, 1936) and then to the complementary pattern of the so-called schools of psychology (Rosenzweig, 1937). Similarly, converging approaches in personality theory emerged as another type of complementarity (Rosenzweig, 1944a). The three types of norms - nomothetic, demographic, and idiodynamic - within the range of dynamic human behavior were formulated and led to idiodynamics as a successor to personality theory. This formulation included the concept of the idioverse, defined as a self-creative and experiential population of events, which opened up a methodology (psychoarcheology) for reconstructing the creativity of outstanding scientific and artistic craftsmen like William James and Sigmund Freud among psychologists, and Henry James, Herman Melville, and Nathaniel Hawthorne among writers of fiction. PMID:15151802

  9. A linear programming manual

    NASA Technical Reports Server (NTRS)

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
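
    As a small worked example of the manual's topics (simultaneous linear equations and dual problems), the sketch below solves an illustrative primal LP and its dual with scipy.optimize.linprog and checks that the optimal objectives coincide; the particular numbers are arbitrary.

      import numpy as np
      from scipy.optimize import linprog

      # primal:  min c^T x  s.t.  A x >= b,  x >= 0
      # dual:    max b^T y  s.t.  A^T y <= c,  y >= 0
      c = np.array([3.0, 2.0])
      A = np.array([[1.0, 1.0],
                    [2.0, 1.0]])
      b = np.array([4.0, 6.0])

      # linprog expects "<=" inequalities, so A x >= b becomes -A x <= -b
      primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
      dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

      print(primal.x, primal.fun)     # optimal x = [2, 2], objective 10
      print(dual.x, -dual.fun)        # dual prices; strong duality gives the same value 10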

  10. Palaeointensities from Pliocene lava sequences in Iceland: emphasis on the problem of Arai plot with two linear segments

    NASA Astrophysics Data System (ADS)

    Tanaka, Hidefumi; Yamamoto, Yuhji

    2016-05-01

    Palaeointensity experiments were carried out on a sample collection from two sections of basalt lava flow sequences of Pliocene age in north central Iceland (Chron C2An) to further refine the knowledge of the behaviour of the palaeomagnetic field. Selection of samples was mainly based on their stability of remanence to thermal demagnetization as well as good reversibility in variations of magnetic susceptibility and saturation magnetization with temperature, which would indicate the presence of magnetite as a product of deuteric oxidation of titanomagnetite. Among 167 lava flows from two sections, 44 flows were selected for the Königsberger-Thellier-Thellier experiment in vacuum. In spite of careful pre-selection of samples, an Arai plot with two linear segments, or a concave-up appearance, was often encountered during the experiments. This non-ideal behaviour was probably caused by an irreversible change in the domain state of the magnetic grains of the pseudo-single-domain (PSD) range. This is assumed because an ideal linear plot was obtained in the second run of the palaeointensity experiment in which a laboratory thermoremanence acquired after the final step of the first run was used as a natural remanence. This experiment was conducted on six selected samples, and no clear difference between the magnetic grains of the experimented and pristine sister samples was found by scanning electron microscope and hysteresis measurements, that is, no occurrence of notable chemical/mineralogical alteration, suggesting that no change in the grain size distribution had occurred. Hence, the two-segment Arai plot was not caused by the reversible multidomain/PSD effect in which the curvature of the Arai plot is dependent on the grain size. Considering that the irreversible change in domain state must have affected data points at not only high temperatures but also low temperatures, fv ≥ 0.5 was adopted as one of the acceptance criteria where fv is a vectorially defined

  11. Spatio-temporal complementarity of wind and solar power in India

    NASA Astrophysics Data System (ADS)

    Lolla, Savita; Baidya Roy, Somnath; Chowdhury, Sourangshu

    2015-04-01

    Wind and solar power are likely to be a part of the solution to the climate change problem. That is why they feature prominently in the energy policies of all industrial economies, including India. One of the major hindrances preventing an explosive growth of wind and solar energy is the issue of intermittency. This is a major problem because, in a rapidly moving economy, energy production must match the patterns of energy demand. Moreover, sudden increases and decreases in energy supply may destabilize the power grids, leading to disruptions in power supply. In this work we explore whether the patterns of variability in wind and solar energy availability can offset each other so that a constant supply can be guaranteed. As a first step, this work focuses on seasonal-scale variability for each of the 5 regional power transmission grids in India. Communication within each grid is better than communication between grids; hence, it is assumed that the grids can switch sources relatively easily. Wind and solar resources are estimated using the MERRA Reanalysis data for the 1979-2013 period. Solar resources are calculated with a 20% conversion efficiency. Wind resources are estimated using a 2 MW turbine power curve. Total resources are obtained by optimizing the location and number of wind/solar energy farms. Preliminary results show that the southern and western grids are more appropriate for cogeneration than the other grids. Many studies on wind-solar cogeneration have focused on temporal complementarity at the local scale. However, this is one of the first studies to explore spatial complementarity over regional scales. This project may help accelerate renewable energy penetration in India by identifying the regional grid(s) where the renewable energy intermittency problem can be minimized.
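
    As a toy illustration of the seasonal complementarity idea, the sketch below builds two synthetic, anti-phased monthly resource profiles and checks that their correlation is strongly negative and that a 50/50 mix is much less variable than either resource alone. The profiles are invented for illustration and are not derived from the MERRA data used in the study.

      import numpy as np

      rng = np.random.default_rng(3)
      months = np.arange(12)
      # hypothetical normalized seasonal profiles: solar and wind roughly anti-phased
      solar = 0.5 + 0.4 * np.sin(2 * np.pi * (months - 2) / 12) + 0.02 * rng.normal(size=12)
      wind = 0.5 - 0.3 * np.sin(2 * np.pi * (months - 2) / 12) + 0.02 * rng.normal(size=12)

      corr = np.corrcoef(solar, wind)[0, 1]        # strongly negative => good complementarity
      mix = 0.5 * solar + 0.5 * wind               # a 50/50 portfolio of the two resources
      print(corr, solar.std(), wind.std(), mix.std())   # the mix varies far less than either source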

  12. On the interaction structure of linear multi-input feedback control systems. M.S. Thesis; [problem solving, lattices (mathematics)

    NASA Technical Reports Server (NTRS)

    Wong, P. K.

    1975-01-01

    The closely-related problems of designing reliable feedback stabilization strategy and coordinating decentralized feedbacks are considered. Two approaches are taken. A geometric characterization of the structure of control interaction (and its dual) was first attempted and a concept of structural homomorphism developed based on the idea of 'similarity' of interaction pattern. The idea of finding classes of individual feedback maps that do not 'interfere' with the stabilizing action of each other was developed by identifying the structural properties of nondestabilizing and LQ-optimal feedback maps. Some known stability properties of LQ-feedback were generalized and some partial solutions were provided to the reliable stabilization and decentralized feedback coordination problems. A concept of coordination parametrization was introduced, and a scheme for classifying different modes of decentralization (information, control law computation, on-line control implementation) in control systems was developed.

  13. The linear quadratic optimal control problem for infinite dimensional systems over an infinite horizon - Survey and examples

    NASA Technical Reports Server (NTRS)

    Bensoussan, A.; Delfour, M. C.; Mitter, S. K.

    1976-01-01

    Available published results are surveyed for a special class of infinite-dimensional control systems whose evolution is characterized by a semigroup of operators of class C subscript zero. Emphasis is placed on an approach that clarifies the system-theoretic relationship among controllability, stabilizability, stability, and the existence of a solution to an associated operator equation of the Riccati type. Formulation of the optimal control problem is reviewed along with the asymptotic behavior of solutions to a general system of equations and several theorems concerning L2 stability. Examples are briefly discussed which involve second-order parabolic systems, first-order hyperbolic systems, and distributed boundary control.

  14. Analysis and algorithms for a regularized Cauchy problem arising from a non-linear elliptic PDE for seismic velocity estimation

    SciTech Connect

    Cameron, M.K.; Fomel, S.B.; Sethian, J.A.

    2009-01-01

    In the present work we derive and study a nonlinear elliptic PDE coming from the problem of estimating the sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, which is hence ill-posed. However we are still able to solve it numerically on a long enough time interval to be of practical use. We used two approaches. The first approach is a finite difference time-marching numerical scheme inspired by the Lax-Friedrichs method. The key features of this scheme are the Lax-Friedrichs averaging and the wide stencil in space. The second approach is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics, with truncation of the Chebyshev series playing the same role, and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
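
    A generic illustration of the first ingredient named above, Lax-Friedrichs averaging with a centered flux difference, is sketched here on the standard inviscid Burgers' test problem rather than on the paper's seismic PDE; grid sizes and the initial condition are arbitrary.

      import numpy as np

      nx, nt = 200, 150
      x = np.linspace(0.0, 1.0, nx)
      dx = x[1] - x[0]
      dt = 0.4 * dx                       # CFL-limited step for |u| <= 1
      u = np.sin(2 * np.pi * x)           # smooth initial data that steepens into a shock

      def flux(u):
          return 0.5 * u ** 2             # Burgers' flux u^2/2

      for _ in range(nt):
          up = np.roll(u, -1)             # periodic neighbour u_{j+1}
          um = np.roll(u, 1)              # periodic neighbour u_{j-1}
          # Lax-Friedrichs: average the neighbours, then apply a centered flux difference
          u = 0.5 * (up + um) - dt / (2 * dx) * (flux(up) - flux(um))

      print(u.min(), u.max())             # bounded by the initial extrema, as expected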

  15. High-order entropy-based closures for linear transport in slab geometry II: A computational study of the optimization problem

    SciTech Connect

    Hauck, Cory D; Alldredge, Graham; Tits, Andre

    2012-01-01

    We present a numerical algorithm to implement entropy-based (M_N) moment models in the context of a simple, linear kinetic equation for particles moving through a material slab. The closure for these models - as is the case for all entropy-based models - is derived through the solution of a constrained, convex optimization problem. The algorithm has two components. The first component is a discretization of the moment equations which preserves the set of realizable moments, thereby ensuring that the optimization problem has a solution (in exact arithmetic). The discretization is a second-order kinetic scheme which uses MUSCL-type limiting in space and a strong-stability-preserving Runge-Kutta time integrator. The second component of the algorithm is a Newton-based solver for the dual optimization problem, which uses an adaptive quadrature to evaluate integrals in the dual objective and its derivatives. The accuracy of the numerical solution to the dual problem plays a key role in the time step restriction for the kinetic scheme. We study in detail the difficulties in the dual problem that arise near the boundary of realizable moments, where quadrature formulas are less reliable and the Hessian of the dual objective function is highly ill-conditioned. Extensive numerical experiments are performed to illustrate these difficulties. In cases where the dual problem becomes 'too difficult' to solve numerically, we propose a regularization technique to artificially move moments away from the realizable boundary in a way that still preserves local particle concentrations. We present results of numerical simulations for two challenging test problems in order to quantify the characteristics of the optimization solver and to investigate when and how frequently the regularization is needed.
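
    To make the dual optimization concrete, the sketch below solves the simplest (M_1) version of the moment-matching dual with a plain Newton iteration on fixed Gauss-Legendre quadrature; the paper's adaptive quadrature, realizability-preserving discretization, and regularization are not reproduced, and the target moments are arbitrary realizable values.

      import numpy as np

      mu, w = np.polynomial.legendre.leggauss(40)      # quadrature nodes/weights on [-1, 1]
      m = np.vstack([np.ones_like(mu), mu])            # moment basis (1, mu)
      u = np.array([1.0, 0.5])                         # prescribed realizable moments (|u1| < u0)

      alpha = np.zeros(2)                              # Lagrange multipliers of the dual problem
      for _ in range(50):
          psi = np.exp(alpha @ m)                      # entropy ansatz exp(a0 + a1*mu)
          g = m @ (w * psi) - u                        # gradient of the (convex) dual objective
          H = (m * (w * psi)) @ m.T                    # Hessian of the dual objective
          alpha -= np.linalg.solve(H, g)               # Newton step
          if np.linalg.norm(g) < 1e-12:
              break

      print(alpha, m @ (w * np.exp(alpha @ m)))        # multipliers and recovered moments (close to u)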

  16. Solving MPCC Problem with the Hyperbolic Penalty Function

    NASA Astrophysics Data System (ADS)

    Melo, Teófilo; Monteiro, M. Teresa T.; Matias, João

    2011-09-01

    The main goal of this work is to solve mathematical programs with complementarity constraints (MPCC) using nonlinear programming (NLP) techniques. A hyperbolic penalty function is used to solve MPCC problems by including the complementarity constraints in the penalty term. This penalty function [1] is twice continuously differentiable and combines features of both exterior and interior penalty methods. A set of AMPL problems from MacMPEC [2] is tested and a comparative study is performed.
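
    A sketch of the penalization idea on a toy MPCC: the complementarity constraint x1*x2 = 0 (with x ≥ 0) is moved into the objective through a smooth hyperbolic penalty, and the penalized NLP is re-solved while the penalty is tightened. The specific functional form of the penalty and the continuation schedule are assumptions made for illustration, not necessarily those of reference [1].

      import numpy as np
      from scipy.optimize import minimize

      def hyperbolic_penalty(g, lam, tau):
          # assumed form: ~0 for g >> 0, ~ -2*lam*g for g << 0, smooth and positive at g = 0
          return -lam * g + np.sqrt((lam * g) ** 2 + tau ** 2)

      def solve_toy_mpcc(lam=1.0, tau=1.0, n_outer=8):
          # toy MPCC: min (x1-1)^2 + (x2-1)^2  s.t.  x1, x2 >= 0 and x1*x2 = 0
          x = np.array([0.8, 0.2])                     # asymmetric start to break the tie
          for _ in range(n_outer):
              obj = lambda x: ((x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2
                               + hyperbolic_penalty(-x[0] * x[1], lam, tau))
              x = minimize(obj, x, bounds=[(0.0, None), (0.0, None)]).x
              lam, tau = 10.0 * lam, 0.5 * tau         # tighten the penalty each outer pass
          return x

      print(solve_toy_mpcc())                          # approaches (1, 0), where x1*x2 = 0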

  17. The methodological lesson of complementarity: Bohr’s naturalistic epistemology

    NASA Astrophysics Data System (ADS)

    Folse, H. J.

    2014-12-01

    Bohr’s intellectual journey began with the recognition that empirical phenomena implied the breakdown of classical mechanics in the atomic domain; this, in turn, led to his adoption of the ‘quantum postulate’ that justifies the ‘stationary states’ of his atomic model of 1913. His endeavor to develop a wider conceptual framework harmonizing both classical and quantum descriptions led to his proposal of the new methodological goals and standards of complementarity. Bohr’s claim that an empirical discovery can demand methodological revision justifies regarding his epistemological lesson as supporting a naturalistic epistemology.

  18. Complementarity of the Maldacena and Randall-Sundrum pictures

    PubMed

    Duff; Liu

    2000-09-01

    We revive an old result, that one-loop corrections to the graviton propagator induce 1/r^3 corrections to the Newtonian gravitational potential, and compute the coefficient due to closed loops of the U(N) N = 4 super-Yang-Mills theory that arises in Maldacena's anti-de Sitter conformal field theory correspondence. We find exact agreement with the coefficient appearing in the Randall-Sundrum brane-world proposal. This provides more evidence for the complementarity of the two pictures. PMID:10970461

  19. Indivisibility, Complementarity and Ontology: A Bohrian Interpretation of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Roldán-Charria, Jairo

    2014-12-01

    The interpretation of quantum mechanics presented in this paper is inspired by two ideas that are fundamental in Bohr's writings: indivisibility and complementarity. Further basic assumptions of the proposed interpretation are completeness, universality and conceptual economy. In the interpretation, decoherence plays a fundamental role in the understanding of measurement. A general and precise conception of complementarity is proposed. It is fundamental in this interpretation to make a distinction between ontological reality, constituted by everything that does not depend at all on the collectivity of human beings, on their decisions or limitations, or on their existence, and empirical reality, constituted by everything that, while not being ontological, is nevertheless intersubjective. According to the proposed interpretation, neither the dynamical properties nor the constitutive properties of microsystems, such as mass, charge and spin, are ontological. The properties of macroscopic systems and space-time are also considered to belong to empirical reality. The acceptance of the above-mentioned conclusion does not imply a total rejection of the notion of ontological reality. In the paper, utilizing the Aristotelian ideas of general cause and potentiality, a relation between ontological reality and empirical reality is proposed. Some glimpses of ontological reality, in the form of what can be said about it, are finally presented.

  20. Products of weak values: Uncertainty relations, complementarity, and incompatibility

    NASA Astrophysics Data System (ADS)

    Hall, Michael J. W.; Pati, Arun Kumar; Wu, Junde

    2016-05-01

    The products of weak values of quantum observables are shown to be of value in deriving quantum uncertainty and complementarity relations, for both weak and strong measurement statistics. First, a "product representation formula" allows the standard Heisenberg uncertainty relation to be derived from a classical uncertainty relation for complex random variables. We show this formula also leads to strong uncertainty relations for unitary operators and underlies an interpretation of weak values as optimal (complex) estimates of quantum observables. Furthermore, we show that two incompatible observables that are weakly and strongly measured in a weak measurement context obey a complementarity relation under the interchange of these observables, in the form of an upper bound on the product of the corresponding weak values. Moreover, general tradeoff relations between weak purity, quantum purity, and quantum incompatibility, and also between weak and strong joint probability distributions, are obtained based on products of real and imaginary components of weak values, where these relations quantify the degree to which weak probabilities can take anomalous values in a given context.
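
    For readers unfamiliar with the central object, the sketch below evaluates weak values A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩ numerically for a qubit with pre- and post-selected states, showing that they are generally complex and can lie far outside the operator's spectrum; the chosen states and observables are merely examples, not those of the paper.

      import numpy as np

      def weak_value(A, psi, phi):
          # weak value A_w = <phi| A |psi> / <phi|psi>
          return (phi.conj() @ (A @ psi)) / (phi.conj() @ psi)

      X = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli observables
      Z = np.array([[1, 0], [0, -1]], dtype=complex)

      eps, delta = 0.1, 0.7
      psi = np.array([1.0, 0.0], dtype=complex)                          # pre-selected state
      phi = np.array([np.sin(eps), np.exp(1j * delta) * np.cos(eps)])    # nearly orthogonal post-selection

      print(weak_value(X, psi, phi))      # complex and far outside the spectrum [-1, 1]
      print(weak_value(Z, psi, phi))      # equals +1 for this choice
      print(weak_value(X, psi, phi) * weak_value(Z, psi, phi))   # a product of weak values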

  1. Complementarity as a Function of Stage in Therapy: An Analysis of Minuchin's Structural Family Therapy.

    ERIC Educational Resources Information Center

    Laird, Heather; Vande Kemp, Hendrika

    1987-01-01

    Explored the level of family therapist complementarity in the early, middle and late stages of therapy performing a micro-analysis of Salvador Minuchin with one family in successful therapy. Level of therapist complementarity was signficantly greater in the early and late stages than in the middle stage, and was significantly correlated with…

  2. Linking Quantitative and Qualitative Distance Education Research through Complementarity. ZIFF Papiere 56.

    ERIC Educational Resources Information Center

    Rothe, J. Peter

    This article focuses on the linkage between the quantitative and qualitative distance education research methods. The concept that serves as the conceptual link is termed "complementarity." The definition of complementarity emerges through a simulated study of FernUniversitat's mentors. The study shows that in the case of the mentors, educational…

  3. Interpersonal Complementarity in the Mental Health Intake: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Rosen, Daniel C.; Miller, Alisa B.; Nakash, Ora; Halperin, Lucila; Alegria, Margarita

    2012-01-01

    The study examined which socio-demographic differences between clients and providers influenced interpersonal complementarity during an initial intake session; that is, behaviors that facilitate harmonious interactions between client and provider. Complementarity was assessed using blinded ratings of 114 videotaped intake sessions by trained…

  4. Determination of the mixing between active neutrinos and sterile neutrino through the quark-lepton complementarity and self-complementarity

    NASA Astrophysics Data System (ADS)

    Ke, Hong-Wei; Liu, Tan; Li, Xue-Qian

    2014-09-01

    It is suggested that there is an underlying symmetry which relates the quark and lepton sectors. Namely, among the mixing matrix elements of Cabibbo-Kobayashi-Maskawa for quarks and Pontecorvo-Maki-Nakagawa-Sakata for leptons there exist complementarity relations at a high energy scale (such as the seesaw or even the grand unification theory scale). We assume that the relations would remain as the matrix elements run down to the electroweak scale. Observable breaking of the rational relation is attributed to the existence of sterile neutrinos that mix with the active neutrinos to produce the observable Pontecorvo-Maki-Nakagawa-Sakata matrix. We show that the involvement of a sterile neutrino in the (3+1) model yields |Ue4|² = 0.040, |Uμ4|² = 0.009, and sin²2α = 0.067. We also find a new self-complementarity ϑ12 + ϑ23 + ϑ13 + α ≈ 90°. The numbers are generally consistent with those obtained by fitting recent measurements; in particular, in this scenario the existence of a sterile neutrino does not upset the LEP data, i.e., the number of neutrino types is very close to 3.

  5. On the need to consider kinetic as well as thermodynamic consequences of the parking problem in quantitative studies of nonspecific binding between proteins and linear polymer chains.

    PubMed

    Munro, P D; Jackson, C M; Winzor, D J

    1998-04-20

    Attention is drawn to the need for caution in the thermodynamic characterization of nonspecific binding of a large ligand to a linear acceptor such as a polynucleotide or a polysaccharide, because of the potential for misidentification of a transient (pseudoequilibrium) state as true equilibrium. The time course of equilibrium attainment during the binding of a large ligand to nonspecific three-residue sequences of a linear acceptor lattice has been simulated, either by numerical integration of the system of ordinary differential equations or by a Monte Carlo procedure, to identify the circumstances under which the kinetics of elimination of suboptimal ligand attachment (called the parking problem) create such difficulties. These simulations have demonstrated that the potential for the existence of a transient plateau in the time course of equilibrium attainment increases greatly (i) with increasing extent of acceptor saturation (i.e., with increasing ligand concentration), (ii) with increasing magnitude of the binding constant, and (iii) with increasing length of the acceptor lattice. Because the capacity of the polymer lattice for ligand is most readily determined under conditions conducive to essentially stoichiometric interaction, the parameter so obtained is thus likely to reflect the transient (irreversible) rather than the equilibrium binding capacity. A procedure is described for evaluating the equilibrium capacity from that irreversible parameter, and illustrated by application to published results [M. Nesheim, M.N. Blackburn, C.M. Lawler, K.G. Mann, J. Biol. Chem. 261 (1986) 3214-3221] for the stoichiometric titration of heparin with thrombin. PMID:17029698
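
    A toy Monte Carlo version of the kind of lattice simulation described above: ligands with a three-site footprint adsorb at random positions on a linear lattice and desorb slowly, so that irreversible adsorption first jams well below full coverage (the parking problem) and reversibility only slowly anneals the packing. Lattice length and rates are illustrative, not the parameters of the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      L, footprint = 300, 3              # lattice sites and ligand footprint (three residues)
      k_off = 1e-3                       # small desorption probability per attempt

      occupied = np.zeros(L, dtype=bool)
      bound_at = []                      # left-most site of each bound ligand

      for attempt in range(200_000):
          if bound_at and rng.random() < k_off:          # occasional desorption event
              i = bound_at.pop(rng.integers(len(bound_at)))
              occupied[i:i + footprint] = False
          j = rng.integers(L - footprint + 1)            # attempted adsorption position
          if not occupied[j:j + footprint].any():        # bind only if all three sites are free
              occupied[j:j + footprint] = True
              bound_at.append(j)

      print(occupied.mean())             # fractional coverage; 1.0 would be full stoichiometric packing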

  6. A Semiclassical Analysis of a Detuned Ring Laser with a Saturable Absorber: New Results for the Steady States and a Formulation of the Linearized Stability Problem.

    NASA Astrophysics Data System (ADS)

    Chyba, David Edward

    This dissertation presents new results for the steady states of a detuned ring laser with a saturable absorber. The treatment is based on a semiclassical model which assumes homogeneously broadened two-level atoms. Part 1 presents a solution of the Maxwell-Bloch equations for the longitudinal dependence of the steady states of this system. The solution is then simplified by use of the mean field approximation. Graphical results in the mean field approximation are presented for the squared electric field versus operating frequency, and for each of these versus cavity tuning and laser excitation. Various cavity linewidths and both resonant and non-resonant amplifier and absorber line center frequencies are considered. The most notable finding is that cavity detuning breaks the degeneracies previously found in the steady state solutions of the fully tuned case. This led to the prediction that an actual system will bifurcate from the zero intensity solution to a steady state solution as laser excitation increases from zero, rather than to the small amplitude pulsations found for the model with mathematically exact tuning of the cavity and the media line centers. Other phenomena suggested by the steady state results include tuning-dependent hysteresis and bistability, and instability due to the appearance of another steady state solution. Results for the case in which the media have different line center frequencies suggest non-monotonic behavior of the electric field amplitude as laser excitation varies, as well as hysteresis and bistability. Part 2 presents a formulation of the linearized stability problem for the steady state solutions discussed in the first part. Thus the effects of detuning and of the other parameters describing the system are incorporated into the stability analysis. The equations of the system are linearized about both the mean field steady states and about the longitudinally dependent steady states. Expansion in Fourier spatial modes is used in the

  7. Complementarity of Neutrinoless Double Beta Decay and Cosmology

    SciTech Connect

    Dodelson, Scott; Lykken, Joseph

    2014-03-20

    Neutrinoless double beta decay experiments constrain one combination of neutrino parameters, while cosmic surveys constrain another. This complementarity opens up an exciting range of possibilities. If neutrinos are Majorana particles, and the neutrino masses follow an inverted hierarchy, then the upcoming sets of both experiments will detect signals. The combined constraints will pin down not only the neutrino masses but also constrain one of the Majorana phases. If the hierarchy is normal, then a beta decay detection with the upcoming generation of experiments is unlikely, but cosmic surveys could constrain the sum of the masses to be relatively heavy, thereby producing a lower bound for the neutrinoless double beta decay rate, and therefore an argument for a next generation beta decay experiment. In this case as well, a combination of the phases will be constrained.

  8. Complementarity of quantum discord and classically accessible information

    DOE PAGES Beta

    Zwolak, Michael P.; Zurek, Wojciech H.

    2013-05-20

    The sum of the Holevo quantity (that bounds the capacity of quantum channels to transmit classical information about an observable) and the quantum discord (a measure of the quantumness of correlations of that observable) yields an observable-independent total given by the quantum mutual information. This split naturally delineates information about quantum systems accessible to observers – information that is redundantly transmitted by the environment – while showing that it is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. In addition, we prove an anti-symmetry property relating accessible information and discord. It shows that information becomes objective – accessible to many observers – only as quantum information is relegated to correlations with the global environment, and, therefore, locally inaccessible. Lastly, the resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality while flagrantly quantum superpositions are out of reach.

  9. Complementarity of quantum discord and classically accessible information

    SciTech Connect

    Zwolak, Michael P.; Zurek, Wojciech H.

    2013-05-20

    The sum of the Holevo quantity (that bounds the capacity of quantum channels to transmit classical information about an observable) and the quantum discord (a measure of the quantumness of correlations of that observable) yields an observable-independent total given by the quantum mutual information. This split naturally delineates information about quantum systems accessible to observers – information that is redundantly transmitted by the environment – while showing that it is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. In addition, we prove an anti-symmetry property relating accessible information and discord. It shows that information becomes objective – accessible to many observers – only as quantum information is relegated to correlations with the global environment, and, therefore, locally inaccessible. Lastly, the resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality while flagrantly quantum superpositions are out of reach.

  10. Interference and complementarity for two-photon hybrid entangled states

    SciTech Connect

    Nogueira, W. A. T.; Santibanez, M.; Delgado, A.; Saavedra, C.; Neves, L.; Lima, G.; Padua, S.

    2010-10-15

    In this work we generate two-photon hybrid entangled states (HESs), where the polarization of one photon is entangled with the transverse spatial degree of freedom of the second photon. The photon pair is created by parametric down-conversion in a polarization-entangled state. A birefringent double-slit couples the polarization and spatial degrees of freedom of these photons, and finally, suitable spatial and polarization projections generate the HES. We investigate some interesting aspects of the two-photon hybrid interference and present this study in the context of the complementarity relation that exists between the visibility of the one-photon and that of the two-photon interference patterns.