The treatment of contact problems as a non-linear complementarity problem
Bjorkman, G.
1994-12-31
Contact and friction problems are of great importance in many engineering applications, for example in ball bearings, bolted joints, metal forming and also car crashes. In these problems the behavior on the contact surface has a great influence on the overall behavior of the structure. Often problems such as wear and initiation of cracks occur on the contact surface. Contact problems are often described using complementarity conditions, w ≥ 0, p ≥ 0, w^Tp = 0, which for example represent the following behavior: (i) two bodies cannot penetrate each other, i.e. the gap must be greater than or equal to zero; (ii) the contact pressure is positive and different from zero only if the two bodies are in contact with each other. Here it is shown that by using the theory of non-linear complementarity problems the unilateral behavior of the problem can be treated in a straightforward way. It is shown how solution methods for discretized frictionless contact problems can be formulated. By formulating the problem either as a generalized equation or as a B-differentiable function, it is pointed out how Newton's method may be extended to contact problems. Also an algorithm for tracing the equilibrium path of frictionless contact problems is described. It is shown that, in addition to the "classical" bifurcation and limit points, there can be points where the equilibrium path has reached an end point or points where bifurcation is possible even if the stiffness matrix is non-singular.
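As a minimal sketch of the complementarity conditions above (function and variable names are mine, not the report's): a gap vector w and contact-pressure vector p satisfy contact complementarity exactly when both are nonnegative and mutually orthogonal.

```python
import numpy as np

def satisfies_complementarity(w, p, tol=1e-10):
    """Check w >= 0, p >= 0, w^T p = 0 for gap vector w and pressure vector p."""
    nonneg = np.all(w >= -tol) and np.all(p >= -tol)
    orthogonal = abs(np.dot(w, p)) <= tol
    return bool(nonneg and orthogonal)
```

For example, an open gap with zero pressure at one node and a closed gap with positive pressure at another passes the check, while simultaneous positive gap and positive pressure fails it.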
Smoothing of mixed complementarity problems
Gabriel, S.A.; More, J.J.
1995-09-01
The authors introduce a smoothing approach to the mixed complementarity problem, and study the limiting behavior of a path defined by approximate minimizers of a nonlinear least squares problem. The main result guarantees that, under a mild regularity condition, limit points of the iterates are solutions to the mixed complementarity problem. The analysis is applicable to a wide variety of algorithms suitable for large-scale mixed complementarity problems.
Global methods for nonlinear complementarity problems
More, J.J.
1994-04-01
Global methods for nonlinear complementarity problems either formulate the problem as a system of nonsmooth nonlinear equations, or use continuation to trace a path defined by a smooth system of nonlinear equations. We formulate the nonlinear complementarity problem as a bound-constrained nonlinear least squares problem. Algorithms based on this formulation are applicable to general nonlinear complementarity problems, can be started from any nonnegative starting point, and each iteration only requires the solution of systems of linear equations. Convergence to a solution of the nonlinear complementarity problem is guaranteed under reasonable regularity assumptions. The convergence rate is Q-linear, Q-superlinear, or Q-quadratic, depending on the tolerances used to solve the subproblems.
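A rough sketch of the least-squares idea, here using the Fischer-Burmeister residual (a standard smooth reformulation of complementarity; not necessarily the paper's exact formulation) for a linear map F(x) = Mx + q with M and q invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # toy positive definite matrix (my choice)
q = np.array([-3.0, -2.0])

def fischer_burmeister(x):
    """Residual that is zero iff x >= 0, F(x) >= 0 and x_i * F_i(x) = 0."""
    F = M @ x + q
    return np.sqrt(x**2 + F**2) - x - F

sol = least_squares(fischer_burmeister, x0=np.ones(2))
```

Driving the residual to zero in the least-squares sense yields a point satisfying all three complementarity conditions simultaneously, which is the essence of the reformulation described in the abstract.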
NASA Astrophysics Data System (ADS)
Čepon, Gregor; Boltežar, Miha
2009-01-01
The aim of this study was to develop an efficient and realistic numerical model in order to predict the dynamic response of belt drives. The belt was modeled as a planar beam element based on an absolute nodal coordinate formulation. A viscoelastic material was adopted for the belt and the corresponding damping and stiffness matrices were determined. The belt-pulley contact was formulated as a linear complementarity problem together with a penalty method. This made it possible for us to accurately predict the contact forces, including the stick and slip zones between the belt and the pulley. The belt-drive model was verified by comparing it with the available analytical solutions. A good agreement was found. Finally, the applicability of the method was demonstrated by considering non-steady belt-drive operating conditions.
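The belt-pulley contact is posed as a linear complementarity problem (LCP). A generic sketch of one common workhorse for such contact LCPs, projected Gauss-Seidel, on invented data (this is not the authors' solver, which couples the LCP with a penalty method):

```python
import numpy as np

def projected_gauss_seidel(M, q, iters=500):
    """Solve x >= 0, Mx + q >= 0, x^T (Mx + q) = 0 for symmetric positive definite M."""
    x = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            r = q[i] + M[i] @ x - M[i, i] * x[i]   # residual excluding x[i]'s own term
            x[i] = max(0.0, -r / M[i, i])          # project onto the nonnegative orthant
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # toy contact stiffness matrix (invented)
q = np.array([1.0, -2.0])
x = projected_gauss_seidel(M, q)
```

Here the first component ends up inactive (zero force, open gap) and the second active, illustrating how an LCP solver separates stick/contact states from free states.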
New Existence Conditions for Order Complementarity Problems
NASA Astrophysics Data System (ADS)
Németh, S. Z.
2009-09-01
Complementarity problems are mathematical models of problems in economics, engineering and physics. A special class of complementarity problems are the order complementarity problems [2]. Order complementarity problems can be applied in lubrication theory [6] and economics [1]. The notion of exceptional family of elements for general order complementarity problems in Banach spaces will be introduced. It will be shown that for general order complementarity problems defined by completely continuous fields the problem has either a solution or an exceptional family of elements (for other notions of exceptional family of elements see [1, 2, 3, 4] and the related references therein). This solves a conjecture of [2] about the existence of exceptional family of elements for order complementarity problems. The proof can be done by using the Leray-Schauder alternative [5]. An application to integral operators will be given.
The fully actuated traffic control problem solved by global optimization and complementarity
NASA Astrophysics Data System (ADS)
Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria
2016-02-01
Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times in each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimization of the total waiting time of the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and to determine effective green and red times for a signalized intersection efficiently.
A path-following interior-point algorithm for linear and quadratic problems
Wright, S.J.
1993-12-01
We describe an algorithm for the monotone linear complementarity problem that converges from many positive, not necessarily feasible, starting points and exhibits polynomial complexity if some additional assumptions are made on the starting point. If the problem has a strictly complementary solution, the method converges subquadratically. We show that the algorithm and its convergence analysis extend readily to the mixed monotone linear complementarity problem and, hence, to all the usual formulations of the linear programming and convex quadratic programming problems.
Neural networks for nonlinear and mixed complementarity problems and their applications.
Dang, Chuangyin; Leung, Yee; Gao, Xing-Bao; Chen, Kai-zhou
2004-03-01
This paper presents two feedback neural networks for solving nonlinear and mixed complementarity problems. The first feedback neural network is designed to solve the strictly monotone problem. It has no parameters and possesses a very simple structure for implementation in hardware. Based on a new idea, the second feedback neural network, for solving the monotone problem, is constructed by using the first one as a subnetwork. This feedback neural network has the least number of state variables. The stability of a solution of the problem is proved. When the problem is strictly monotone, the unique solution is uniformly and asymptotically stable in the large. When the problem has many solutions, it is guaranteed that, for any initial point, the trajectory of the network converges to an exact solution of the problem. The feasibility and efficiency of the proposed neural networks are supported by simulation experiments. Moreover, the feedback neural networks can also be applied to solving general nonlinear convex programming problems and nonlinear monotone variational inequalities with convex constraints.
Quantum Algorithm for Linear Programming Problems
NASA Astrophysics Data System (ADS)
Joag, Pramod; Mehendale, Dhananjay
The quantum algorithm (PRL 103, 150502, 2009) solves a system of linear equations with exponential speedup over existing classical algorithms. We show that the above algorithm can be readily adopted in iterative algorithms for solving linear programming (LP) problems. The first iterative algorithm that we suggest for the LP problem follows from duality theory. It consists of finding a nonnegative solution of the equations for the duality condition, for the constraints imposed by the given primal problem, and for the constraints imposed by its corresponding dual problem. This problem is called the problem of nonnegative least squares, or simply the NNLS problem. We use a well-known method for solving the NNLS problem due to Lawson and Hanson. This algorithm essentially consists of solving a new system of linear equations in each iterative step. The other iterative algorithms that can be used are those based on interior point methods. The same technique can be adopted for solving network flow problems, as these problems can be readily formulated as LP problems. The suggested quantum algorithm can solve LP problems and network flow problems of very large size, involving millions of variables.
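The Lawson-Hanson NNLS method mentioned above is implemented in SciPy; a minimal classical (non-quantum) illustration on data of my own choosing:

```python
import numpy as np
from scipy.optimize import nnls

# Overdetermined system whose unconstrained least-squares solution has a
# negative component, so the nonnegativity constraint becomes active.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, -1.0, 1.0])

x, residual_norm = nnls(A, b)   # min ||Ax - b|| subject to x >= 0 (Lawson-Hanson)
```

The active-set method clamps the offending component to zero and re-solves on the remaining columns, which is exactly the per-iteration "new system of linear equations" the abstract refers to.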
The Vertical Linear Fractional Initialization Problem
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
1999-01-01
This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.
Numerical linear algebra for reconstruction inverse problems
NASA Astrophysics Data System (ADS)
Nachaoui, Abdeljalil
2004-01-01
Our goal in this paper is to discuss various issues we have encountered in trying to find and implement efficient solvers for a boundary integral equation (BIE) formulation of an iterative method for solving a reconstruction problem. We survey some methods from numerical linear algebra, which are relevant for the solution of this class of inverse problems. We motivate the use of our constructing algorithm, discuss its implementation and mention the use of preconditioned Krylov methods.
Linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1980-01-01
Problem involves design of controls for linear time-invariant system disturbed by white noise. Solution is Kalman filter coupled through set of optimal regulator gains to produce desired control signal. Key to solution is solving matrix Riccati differential equation. LSOCE effectively solves problem for wide range of practical applications. Program is written in FORTRAN IV for batch execution and has been implemented on IBM 360.
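The key computational step described, solving the steady-state matrix Riccati equation for the optimal regulator gains, can be sketched for a double-integrator plant (my example; this is not the LSOCE program itself, which is FORTRAN IV):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (toy plant, my choice)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # state weighting
R = np.array([[1.0]])                     # control weighting

P = solve_continuous_are(A, B, Q, R)      # steady-state Riccati solution
K = np.linalg.solve(R, B.T @ P)           # optimal regulator gain u = -K x
```

For this plant the gain works out to K = [1, √3], and the closed-loop matrix A - BK is stable, mirroring the regulator-gain stage of the Kalman-filter-plus-regulator structure described above.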
Dynamics of Kepler problem with linear drag
NASA Astrophysics Data System (ADS)
Margheri, Alessandro; Ortega, Rafael; Rebelo, Carlota
2014-09-01
We study the dynamics of the Kepler problem with linear drag. We prove that motions with nonzero angular momentum have no collisions and travel from infinity to the singularity. In the process, the energy takes all real values and the angular velocity becomes unbounded. We also prove that there are two types of linear motions: capture-collision and ejection-collision. The behaviour of solutions at collisions is the same as in the conservative case. Proofs are obtained using the geometric theory of ordinary differential equations and two regularizations for the singularity of the Kepler problem equation. The first, already considered in Diacu (Celest Mech Dyn Astron 75:1-15, 1999), is mainly used for the study of the linear motions. The second, the well-known Levi-Civita transformation, allows us to complete the study of the asymptotic values of the energy and to prove the existence of collision solutions with arbitrary energy.
Drinkers and Bettors: Investigating the Complementarity of Alcohol Consumption and Problem Gambling
Maclean, Johanna Catherine; Ettner, Susan L.
2009-01-01
Regulated gambling is a multi-billion dollar industry in the United States with greater than 100 percent increases in revenue over the past decade. Along with this rise in gambling popularity and gaming options comes an increased risk of addiction and the associated social costs. This paper focuses on the effect of alcohol use on gambling-related problems. Variables correlated with both alcohol use and gambling may be difficult to observe, and the inability to include these items in empirical models may bias coefficient estimates. After addressing the endogeneity of alcohol use when appropriate, we find strong evidence that problematic gambling and alcohol consumption are complementary activities. PMID:18430523
An algorithm for linearizing convex extremal problems
Gorskaya, Elena S
2010-06-09
This paper suggests a method of approximating the solution of minimization problems for convex functions of several variables under convex constraints. The main idea of this approach is the approximation of a convex function by a piecewise linear function, which results in replacing the problem of convex programming by a linear programming problem. To carry out such an approximation, the epigraph of a convex function is approximated by the projection of a polytope of greater dimension. In the first part of the paper, the problem is considered for functions of one variable. In this case, an algorithm for approximating the epigraph of a convex function by a polygon is presented; it is shown that this algorithm is optimal with respect to the number of vertices of the polygon, and exact bounds for this number are obtained. After this, using an induction procedure, the algorithm is generalized to certain classes of functions of several variables. Applying the suggested method, polynomial algorithms for an approximate calculation of the L_p-norm of a matrix and of the minimum of the entropy function on a polytope are obtained. Bibliography: 19 titles.
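The core device, replacing a convex objective by a piecewise linear outer approximation so that a linear program results, can be sketched for f(x) = x² with tangent-line cuts (a toy instance of my own; the paper's algorithm additionally chooses the approximation optimally):

```python
import numpy as np
from scipy.optimize import linprog

# Epigraph variable t with tangent cuts t >= 2*a*x - a**2 of f(x) = x**2,
# taken at a handful of anchor points (chosen arbitrarily here).
anchors = [0.0, 0.5, 1.0, 1.5, 2.0]
A_ub = np.array([[2.0 * a, -1.0] for a in anchors])   # rewrite as 2*a*x - t <= a**2
b_ub = np.array([a**2 for a in anchors])

# Minimize t (the piecewise linear surrogate of f) with x restricted to [0.5, 2].
res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.5, 2.0), (None, None)])
```

Because one anchor sits at the constrained minimizer x = 0.5, the LP recovers the exact optimum f(0.5) = 0.25; with generic anchors it would return a lower bound that tightens as cuts are added.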
Numerical stability in problems of linear algebra.
NASA Technical Reports Server (NTRS)
Babuska, I.
1972-01-01
Mathematical problems are introduced as mappings from the space of input data to that of the desired output information. Then a numerical process is defined as a prescribed recurrence of elementary operations creating the mapping of the underlying mathematical problem. The ratio of the error committed by executing the operations of the numerical process (the roundoff errors) to the error introduced by perturbations of the input data (initial error) gives rise to the concept of lambda-stability. As examples, several processes are analyzed from this point of view, including, especially, old and new processes for solving systems of linear algebraic equations with tridiagonal matrices. In particular, it is shown how such a priori information as, for instance, knowledge of the row sums of the matrix can be utilized. Information of this type is frequently available where the system arises in connection with the numerical solution of differential equations.
Brachistochrone problem with linear and quadratic drag
NASA Astrophysics Data System (ADS)
Cherkasov, O. Yu.; Zarodnyuk, A. V.
2014-12-01
The motion of a material point in a vertical plane is considered under the assumption that the gravitational field and the atmosphere are homogeneous. The problem is to determine the shape of the trajectory ensuring the maximum horizontal distance from the initial position for a fixed time interval. The problem formulated above is close to the famous brachistochrone problem with friction. The maximum principle is applied to reduce the optimal control problem to a boundary-value problem for a system of two nonlinear differential equations. Qualitative analysis of this system allows us to determine typical features of the optimal trajectories.
A multistage linear array assignment problem
NASA Technical Reports Server (NTRS)
Nicol, David M.; Shier, D. R.; Kincaid, R. K.; Richards, D. S.
1988-01-01
The implementation of certain algorithms on parallel processing computing architectures can involve partitioning contiguous elements into a fixed number of groups, each of which is to be handled by a single processor. It is desired to find an assignment of elements to processors that minimizes the sum of the maximum workloads experienced at each stage. This problem can be viewed as a multi-objective network optimization problem. Polynomially-bounded algorithms are developed for the case of two stages, whereas the associated decision problem (for an arbitrary number of stages) is shown to be NP-complete. Heuristic procedures are therefore proposed and analyzed for the general problem. Computational experience with one of the exact algorithms, incorporating certain pruning rules, is presented. Empirical results also demonstrate that one of the heuristic procedures is especially effective in practice.
An amoeboid algorithm for solving linear transportation problem
NASA Astrophysics Data System (ADS)
Gao, Cai; Yan, Chao; Zhang, Zili; Hu, Yong; Mahadevan, Sankaran; Deng, Yong
2014-03-01
The Transportation Problem (TP) is one of the basic operations research problems, and it plays an important role in many practical applications. In this paper, a bio-inspired mathematical model is proposed to handle the Linear Transportation Problem (LTP) in directed networks by modifying the original amoeba model, Physarum Solver. Several examples are used to show that the proposed model can effectively solve the Balanced Transportation Problem (BTP), the Unbalanced Transportation Problem (UTP), and especially the Generalized Transportation Problem (GTP), in a nondiscrete way.
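For comparison, a small balanced instance can be solved directly as a linear program (data invented for illustration; this is the standard LP formulation of the TP, not the amoeboid algorithm):

```python
import numpy as np
from scipy.optimize import linprog

supply = [15, 25]                       # source capacities (invented)
demand = [10, 20, 10]                   # sink requirements (balanced: totals match)
cost = np.array([[2.0, 1.0, 3.0],       # unit shipping costs (invented)
                 [1.0, 2.0, 2.0]])
m, n = cost.shape

# Flatten x[i, j] row-major; build equality constraints for supplies and demands.
A_eq, b_eq = [], []
for i in range(m):                      # each source ships exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                      # each sink receives exactly its demand
    col = np.zeros(m * n); col[j::n] = 1.0
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * (m * n))
```

The LP baseline gives the exact optimal cost (55.0 for this instance), which is the yardstick any heuristic or bio-inspired TP solver would be compared against.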
Bioethical pluralism and complementarity.
Grinnell, Frederick; Bishop, Jeffrey P; McCullough, Laurence B
2002-01-01
This essay presents complementarity as a novel feature of bioethical pluralism. First introduced by Niels Bohr in conjunction with quantum physics, complementarity in bioethics occurs when different perspectives account for equally important features of a situation but are mutually exclusive. Unlike conventional approaches to bioethical pluralism, which attempt in one fashion or another to isolate and choose between different perspectives, complementarity accepts all perspectives. As a result, complementarity results in a state of holistic, dynamic tension, rather than one that yields singular or final moral judgments.
Singular linear-quadratic control problem for systems with linear delay
Sesekin, A. N.
2013-12-18
A singular linear-quadratic optimization problem on the trajectories of non-autonomous linear differential equations with linear delay is considered. The peculiarity of this problem is that it has no solution in the class of integrable controls. To ensure the existence of a solution, the class of controls must be expanded to include controls with impulse components. Dynamical systems with linear delay are used to describe the motion of a pantograph current collector in electric traction, in biology, etc. It should be noted that singular quality criteria occur quite commonly in practical problems, and therefore the study of these problems is certainly important. For the problem under discussion, an optimal program control containing impulse components at the initial and final moments of time is constructed under certain assumptions on the functional and the right-hand side of the control system.
Experiences with linear solvers for oil reservoir simulation problems
Joubert, W.; Janardhan, R.; Biswas, D.; Carey, G.
1996-12-31
This talk will focus on practical experiences with iterative linear solver algorithms used in conjunction with Amoco Production Company's Falcon oil reservoir simulation code. The goal of this study is to determine the best linear solver algorithms for these types of problems. The results of numerical experiments will be presented.
Multisplitting for linear, least squares and nonlinear problems
Renaut, R.
1996-12-31
In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of least squares problems and nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work with Andreas Frommer, University of Wuppertal, on the linear problems, and with Hans Mittelmann, Arizona State University, on the nonlinear problems.
Complementarity, Sets and Numbers
ERIC Educational Resources Information Center
Otte, M.
2003-01-01
Niels Bohr's term "complementarity" has been used by several authors to capture the essential aspects of the cognitive and epistemological development of scientific and mathematical concepts. In this paper we will conceive of complementarity in terms of the dual notions of extension and intension of mathematical terms. A complementarist approach…
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
NASA Astrophysics Data System (ADS)
Howard, Don
2013-04-01
Complementarity is Niels Bohr's most original contribution to the interpretation of quantum mechanics, but there is widespread confusion about complementarity in the popular literature and even in some of the serious scholarly literature on Bohr. This talk provides a historically grounded guide to Bohr's own understanding of the doctrine, emphasizing the manner in which complementarity is deeply rooted in the physics of the quantum world, in particular the physics of entanglement, and is, therefore, not just an idiosyncratic philosophical addition. Among the more specific points to be made are that complementarity is not to be confused with wave-particle duality, that it is importantly different from Heisenberg's idea of observer-induced limitations on measurability, and that it is in no way an expression of a positivist philosophical project.
Robust output regulation problem for linear time-delay systems
NASA Astrophysics Data System (ADS)
Lu, Maobin; Huang, Jie
2015-06-01
In this paper, we study the robust output regulation problem for linear systems with input time-delay. By extending the internal model design method to linear time-delay systems, we have established solvability conditions for the problem by both dynamic state feedback control and dynamic output feedback control. The advantages of internal model approach over the feedforward design approach are that it can handle perturbations of the uncertain parameters in the plant and the control law, and it does not need to solve the regulator equations.
Inverse modelling problems in linear algebra undergraduate courses
NASA Astrophysics Data System (ADS)
Martinez-Luaces, Victor E.
2013-10-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different presentations will be discussed. Finally, several results will be presented and some conclusions proposed.
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
NASA Astrophysics Data System (ADS)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
2014-06-01
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation, in which the α-cut technique and linguistic hedges are used in the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
On the linear properties of the nonlinear radiative transfer problem
NASA Astrophysics Data System (ADS)
Pikichyan, H. V.
2016-11-01
In this report, we expand on the assertions made for the nonlinear problem of reflection/transmission of radiation by a scattering/absorbing one-dimensional anisotropic medium of finite geometrical thickness, when both of its boundaries are illuminated by intense monochromatic radiative beams. The new conceptual element is a set of well-defined, so-called linear images, which admit a probabilistic interpretation. In the framework of the nonlinear reflection/transmission problem, we derive a solution which is similar to that of the linear case; that is, the solution reduces to a linear combination of linear images. By virtue of their physical meaning, these functions describe the reflectivity and transmittance of the medium for a single photon, or a beam of unit intensity, incident on one of the boundaries of the layer, while the medium remains under bilateral illumination by external exciting radiation of arbitrary intensity. To determine the linear images, we exploit three well-known methods: (i) adding of layers, (ii) its limiting form, described by the differential equations of invariant imbedding, and (iii) a transition to the so-called functional equations of "Ambartsumyan's complete invariance".
Towards an ideal preconditioner for linearized Navier-Stokes problems
Murphy, M.F.
1996-12-31
Discretizing certain linearizations of the steady-state Navier-Stokes equations gives rise to nonsymmetric linear systems with indefinite symmetric part. We show that for such systems there exists a block diagonal preconditioner which gives convergence in three GMRES steps, independent of the mesh size and viscosity parameter (Reynolds number). While this "ideal" preconditioner is too expensive to be used in practice, it provides a useful insight into the problem. We then consider various approximations to the ideal preconditioner, and describe the eigenvalues of the preconditioned systems. Finally, we compare these preconditioners numerically, and present our conclusions.
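The three-step GMRES property reflects the fact that the preconditioned matrix has at most three distinct eigenvalues, 1 and (1 ± √5)/2; a small numerical check on a randomly generated saddle-point system (the construction and sizes are my own, not the paper's Navier-Stokes discretization):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)             # symmetric positive definite (1,1)-block
B = rng.standard_normal((m, n))         # full-rank constraint block

K = np.block([[A, B.T], [B, np.zeros((m, m))]])     # saddle-point matrix
S = B @ np.linalg.solve(A, B.T)                     # Schur complement B A^{-1} B^T
P = np.block([[A, np.zeros((n, m))],                # "ideal" block diagonal
              [np.zeros((m, n)), S]])               # preconditioner diag(A, S)

eigs = np.linalg.eigvals(np.linalg.solve(P, K))     # spectrum of P^{-1} K
```

With only three distinct eigenvalues, the minimal polynomial of the preconditioned operator has degree three, so GMRES terminates (in exact arithmetic) in three iterations regardless of mesh size.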
Successive linear optimization approach to the dynamic traffic assignment problem
Ho, J.K.
1980-11-01
A dynamic model for the optimal control of traffic flow over a network is considered. The model, which treats congestion explicitly in the flow equations, gives rise to nonlinear, nonconvex mathematical programming problems. It has been shown for a piecewise linear version of this model that a global optimum is contained in the set of optimal solutions of a certain linear program. A sufficient condition for optimality is presented, which implies that a global optimum can be obtained by successively optimizing at most N + 1 objective functions for the linear program, where N is the number of time periods in the planning horizon. Computational results are reported to indicate the efficiency of this approach.
An analytically solvable eigenvalue problem for the linear elasticity equations.
Day, David Minot; Romero, Louis Anthony
2004-07-01
Analytic solutions are useful for code verification. Structural vibration codes approximate solutions to the eigenvalue problem for the linear elasticity equations (Navier's equations). Unfortunately the verification method of 'manufactured solutions' does not apply to vibration problems. Verification books (for example [2]) tabulate a few of the lowest modes, but are not useful for computations of large numbers of modes. A closed form solution is presented here for all the eigenvalues and eigenfunctions for a cuboid solid with isotropic material properties. The boundary conditions correspond physically to a greased wall.
A recurrent neural network for solving bilevel linear programming problem.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian
2014-04-01
In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.
Solution of the multiple dosing problem using linear programming.
Hacisalihzade, S S; Mansour, M
1985-07-01
A system theoretical approach to drug concentration-time data analysis is introduced after a discussion of some relevant concepts as they are used in system theory. The merits of this approach are demonstrated on the multiple dosing problem. It is shown that dosage minimization without stringent constraints does not result in the desired therapeutic effect. In a different optimization, the discrepancy between the actual and the desired time histories of the relevant substance's plasma concentration is minimized. It is shown that both of these optimizations can be reduced to linear programming problems, which are easily solvable with today's computers. The methods are demonstrated in a case study of dopaminergic substitution in Parkinson's disease, where computer simulations show them to yield excellent results.
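One way the dose-minimization step can be cast as a linear program (a hypothetical sketch with invented kinetics, not the paper's model): with linear kinetics, concentrations from successive doses superpose, so requiring the level to stay above a minimum effective concentration at sampling times gives linear constraints, and total dose is a linear objective.

```python
import numpy as np
from scipy.optimize import linprog

k = 0.3                                   # elimination rate constant, 1/h (assumed)
dose_times = np.array([0.0, 8.0, 16.0])   # fixed dosing schedule in hours (assumed)
check_times = np.arange(1.0, 25.0, 1.0)   # times at which the level must be adequate
c_min = 1.0                               # minimum effective concentration (assumed)

# A[i, j] = contribution of a unit dose given at dose_times[j] to the
# concentration at check_times[i], under simple exponential elimination.
A = np.where(check_times[:, None] >= dose_times[None, :],
             np.exp(-k * (check_times[:, None] - dose_times[None, :])), 0.0)

res = linprog(c=np.ones(len(dose_times)),                 # minimize total dose
              A_ub=-A, b_ub=-c_min * np.ones(len(check_times)),
              bounds=[(0, None)] * len(dose_times))       # doses are nonnegative
```

The solver returns the smallest doses that keep the simulated concentration at or above c_min at every check time; swapping the objective for an L1 deviation from a target profile gives the paper's second optimization in the same LP framework.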
NASA Astrophysics Data System (ADS)
Bousso, Raphael
2013-06-01
The near-horizon field B of an old black hole is maximally entangled with the early Hawking radiation R, by unitarity of the S-matrix. But B must be maximally entangled with the black hole interior A, by the equivalence principle. Causal patch complementarity fails to reconcile these conflicting requirements. The system B can be probed by a freely falling observer while there is still time to turn around and remain outside the black hole. Therefore, the entangled state of the BR system is dictated by unitarity even in the infalling patch. If, by monogamy of entanglement, B is not entangled with A, the horizon is replaced by a singularity or “firewall.” To illustrate the radical nature of the ideas that are needed, I briefly discuss two approaches for avoiding a firewall: the identification of A with a subsystem of R; and a combination of patch complementarity with the Horowitz-Maldacena final-state proposal.
Conformal complementarity maps
NASA Astrophysics Data System (ADS)
Barbón, José L. F.; Rabinovici, Eliezer
2013-12-01
We study quantum cosmological models for certain classes of bang/crunch singularities, using the duality between expanding bubbles in AdS with a FRW interior cosmology and perturbed CFTs on de Sitter space-time. It is pointed out that horizon complementarity in the AdS bulk geometries is realized as a conformal transformation in the dual deformed CFT. The quantum version of this map is described in full detail in a toy model involving conformal quantum mechanics. In this system the complementarity map acts as an exact duality between eternal and apocalyptic Hamiltonian evolutions. We calculate the commutation relation between the Hamiltonians corresponding to the different frames. It vanishes only on scale invariant states.
Point source reconstruction principle of linear inverse problems
NASA Astrophysics Data System (ADS)
Terazono, Yasushi; Fujimaki, Norio; Murata, Tsutomu; Matani, Ayumu
2010-11-01
Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the ℓp-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of all the observations of source blocks, in other words, the block-wisely extended leadfield-weighted ℓ1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.
Inverse problems for linear hyperbolic equations using mixed formulations
NASA Astrophysics Data System (ADS)
Cîndea, Nicolae; Münch, Arnaud
2015-07-01
We introduce a direct method for the numerical solution of inverse problems for linear hyperbolic equations. We first consider the reconstruction of the full solution of the equation posed in Ω × (0,T), Ω being a bounded subset of ℝ^N, from a partial distributed observation. We employ a least-squares technique and minimize the L2-norm of the distance from the observation to any solution. Taking the hyperbolic equation as the main constraint of the problem, the optimality conditions are reduced to a mixed formulation involving both the state to reconstruct and a Lagrange multiplier. Under usual geometric optics conditions, we show the well-posedness of this mixed formulation (in particular the inf-sup condition) and then introduce a numerical approximation based on space-time finite element discretization. We prove the strong convergence of the approximation and then discuss several examples for N = 1 and N = 2. The problem of the reconstruction of both the state and the source terms is also addressed.
First integrals for the Kepler problem with linear drag
NASA Astrophysics Data System (ADS)
Margheri, Alessandro; Ortega, Rafael; Rebelo, Carlota
2016-07-01
In this work we consider the Kepler problem with linear drag, and prove the existence of a continuous vector-valued first integral, obtained by taking the limit as t → +∞ of the Runge-Lenz vector. The norm of this first integral can be interpreted as an asymptotic eccentricity e_∞ with 0 ≤ e_∞ ≤ 1. The orbits satisfying e_∞ < 1 approach the singularity by an elliptic spiral, and the corresponding solutions x(t) = r(t)e^{iθ(t)} have a norm r(t) that goes to zero like a negative exponential and an argument θ(t) that goes to infinity like a positive exponential. In particular, the difference between consecutive times of passage through the pericenter, say T_{n+1} − T_n, goes to zero as 1/n.
Using parallel banded linear system solvers in generalized eigenvalue problems
NASA Technical Reports Server (NTRS)
Zhang, Hong; Moss, William F.
1994-01-01
Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speedup is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
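A minimal serial sketch of shifted subspace iteration for the symmetric generalized eigenproblem Ax = λBx helps make the algorithm concrete; it uses dense factorizations rather than the parallel banded solvers of the paper, and the shift here is a plain spectral target, not the communication-balancing optimal shift the authors estimate:

```python
import numpy as np
from scipy.linalg import eigh, lu_factor, lu_solve

def subspace_iteration(A, B, sigma, p, iters=50):
    """Approximate the p eigenvalues of A x = lambda B x nearest the shift sigma."""
    n = A.shape[0]
    lu = lu_factor(A - sigma * B)          # factor the shifted operator once
    X = np.random.default_rng(0).standard_normal((n, p))
    for _ in range(iters):
        Y = lu_solve(lu, B @ X)            # inverse-iteration step on the block
        # Rayleigh-Ritz projection onto span(Y)
        w, V = eigh(Y.T @ A @ Y, Y.T @ B @ Y)
        X = Y @ V
        X /= np.linalg.norm(X, axis=0)     # keep columns well scaled
    return w

A = np.diag([1.0, 2.0, 5.0, 8.0, 13.0])
B = np.eye(5)
vals = subspace_iteration(A, B, sigma=0.0, p=2)
```

The shift plays the same accelerating role as in the abstract: eigenvalues of (A − σB)⁻¹B nearest σ dominate, so the subspace converges fastest toward the eigenpairs closest to the shift.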
An application of GMRES to indefinite linear problems in meteorology
NASA Astrophysics Data System (ADS)
Navarra, Antonio
1989-05-01
A preliminary investigation of a Krylov subspace method (GMRES) has been performed on a set of representative problems that can be encountered in geophysical fluid dynamics. In the majority of the numerical experiments, practical convergence was obtained when the eigenvalues of the linear operator were confined to one complex half-plane, in agreement with Saad and Schultz, although in some cases such confinement was not enough to guarantee a practical rate of convergence. Simple shifts and scale-selective dissipation are very effective in controlling convergence. A substantial improvement can be achieved by using preconditioning suggested by the physical nature of the problem, and this appears to be the best way to accelerate convergence. Even with preconditioning, however, it remains important that most of the eigenvalues be confined to one half-plane.
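The kind of indefinite system at issue can be illustrated with a 1D Helmholtz-type operator, a simple stand-in for the meteorological operators above (the grid size and wavenumber below are my own illustrative choices):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# 1D operator -u'' - k^2 u on a uniform grid: indefinite once k^2 exceeds
# the smallest Laplacian eigenvalue, so its spectrum straddles zero.
n = 50
h = 1.0 / (n + 1)
k2 = 40.0
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = (lap - k2 * sp.identity(n)).tocsr()
b = np.ones(n)

# Unrestarted GMRES (restart >= n) converges in at most n steps in exact
# arithmetic, even for an indefinite spectrum; restarted variants can stall.
x, info = gmres(A, b, restart=n, maxiter=1000)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Shifting the operator (adding a multiple of the identity) moves the spectrum toward one half-plane, which is one way to read the abstract's observation that simple shifts help control convergence.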
Complementarity and quantum walks
Kendon, Viv; Sanders, Barry C.
2005-02-01
We show that quantum walks interpolate between a coherent 'wave walk' and a random walk depending on how strongly the walker's coin state is measured; i.e., the quantum walk exhibits the quintessentially quantum property of complementarity, which is manifested as a tradeoff between knowledge of which path the walker takes vs the sharpness of the interference pattern. A physical implementation of a quantum walk (the quantum quincunx) should thus have an identifiable walker and the capacity to demonstrate the interpolation between wave walk and random walk depending on the strength of measurement.
The Intelligence of Dual Simplex Method to Solve Linear Fractional Fuzzy Transportation Problem
Narayanamoorthy, S.; Kalyani, S.
2015-01-01
An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach, the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. These two linear fuzzy transportation problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example. PMID: 25810713
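For the crisp (non-fuzzy) case, the classical Charnes-Cooper substitution shows how a linear-fractional objective reduces to a single linear program; this is a related reduction, not the paper's dual-simplex decomposition, and the tiny instance below is invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize (2 x1 + x2) / (x1 + x2 + 1)  s.t.  x1 + x2 <= 3, x >= 0.
# Charnes-Cooper: set t = 1 / (d'x + beta), y = t x, giving the LP
#   max c'y + alpha t   s.t.  A y - b t <= 0,  d'y + beta t = 1,  y, t >= 0.
c, alpha = np.array([2.0, 1.0]), 0.0
d, beta = np.array([1.0, 1.0]), 1.0
A, bvec = np.array([[1.0, 1.0]]), np.array([3.0])

res = linprog(c=-np.append(c, alpha),                 # linprog minimizes
              A_ub=np.hstack([A, -bvec[:, None]]), b_ub=[0.0],
              A_eq=[np.append(d, beta)], b_eq=[1.0],
              bounds=[(0, None)] * 3)
y, t = res.x[:2], res.x[2]
x = y / t                                             # recover the fractional optimum
value = (c @ x + alpha) / (d @ x + beta)
```

The substitution is exact whenever the denominator is positive on the feasible set, so the fractional optimum is read off directly from one LP solve.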
Status Report: Black Hole Complementarity Controversy
NASA Astrophysics Data System (ADS)
Lee, Bum-Hoon; Yeom, Dong-han
2014-01-01
Black hole complementarity was a consensus among string theorists for the interpretation of the information loss problem. Recently, however, some authors have found inconsistencies in black hole complementarity: the large N rescaling and the Almheiri, Marolf, Polchinski and Sully (AMPS) argument. According to AMPS, for consistency the horizon should be a firewall, so that one cannot penetrate it. There are some controversial discussions on the firewall. Apart from these papers, the authors suggest an assertion using a semi-regular black hole model and conclude that the firewall, if it exists, should affect the asymptotic observer. In addition, any argument that does not consider the duplication experiment and the large N rescaling is difficult to accept.
Multigrid approaches to non-linear diffusion problems on unstructured meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
The efficiency of three multigrid methods for solving highly non-linear diffusion problems on two-dimensional unstructured meshes is examined. The three multigrid methods differ mainly in the manner in which the nonlinearities of the governing equations are handled. These comprise a non-linear full approximation storage (FAS) multigrid method which is used to solve the non-linear equations directly, a linear multigrid method which is used to solve the linear system arising from a Newton linearization of the non-linear system, and a hybrid scheme which is based on a non-linear FAS multigrid scheme, but employs a linear solver on each level as a smoother. Results indicate that all methods are equally effective at converging the non-linear residual in a given number of grid sweeps, but that the linear solver is more efficient in CPU time due to the lower cost of linear versus non-linear grid sweeps.
The complementarity of consciousness.
Jahn, R G
2007-01-01
The concept of complementarity, originally proposed by Bohr in a microphysical context, and subsequently extended by himself, Heisenberg and Pauli to encompass subjective as well as objective dimensions of human experience, can be further expanded to apply to many common attitudes of human consciousness. At issue is the replacement of strict polar opposition of superficially antithetical consciousness capacities, such as analysis and synthesis, logic and intuition, or doing and being, by more generous conjugation that allows the pairs to operate in constructive triangulation and harmony. In this format, the physical principle of uncertainty also acquires metaphoric relevance in limiting the attainable sharpness of specification of any consciousness complements, and may serve to define their optimum balance in establishing reality. These principles thus lend themselves to representation of wave-like vs. particle-like operations of consciousness; to trade-offs between rigor and ambience in consciousness research; to generic masculine/feminine reinforcement; and to the interplay of science and spirit in any creative enterprise.
Problems with the linear q-Fokker Planck equation
NASA Astrophysics Data System (ADS)
Yano, Ryosuke
2015-05-01
In this letter, we discuss the linear q-Fokker-Planck equation, whose solution follows the Tsallis distribution, from the viewpoint of kinetic theory. Using normal definitions of moments, the distribution function can be expanded with infinite moments for 0 ⩽ q < 1, whereas it cannot be expanded with infinite moments for 1 < q owing to the emergence of characteristic points in the moments. From Grad's 13 moment equations for the linear q-Fokker-Planck equation, the dissipation rate of the heat flux diverges for 0 ⩽ q < 2/3. In other words, the thermal conductivity relating the heat flux to the spatial gradient of the temperature and the coefficient relating the heat flux to the spatial gradient of the density both jump to zero at q = 2/3, discontinuously.
Fixed Point Problems for Linear Transformations on Pythagorean Triples
ERIC Educational Resources Information Center
Zhan, M.-Q.; Tong, J.-C.; Braza, P.
2006-01-01
In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z]^T with y even) into a standard Pythagorean triple, and which have [3 4 5]^T as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…
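The fixed-point condition is easy to explore numerically. The finite search below (the entry bounds and the sample triples are my own choices, not the article's method) enumerates small integer matrices that fix [3 4 5]^T and map a handful of standard triples to standard triples; passing this finite check is necessary but not sufficient, so hits are only candidates for membership in S*:

```python
import itertools
import numpy as np

def is_standard_triple(t):
    """Positive x, y, z with x^2 + y^2 = z^2 and y even."""
    x, y, z = t
    return x > 0 and y > 0 and z > 0 and y % 2 == 0 and x * x + y * y == z * z

# Rows with small integer entries compatible with M [3 4 5]^T = [3 4 5]^T.
rng = range(-3, 4)
rows = {target: [r for r in itertools.product(rng, repeat=3)
                 if 3 * r[0] + 4 * r[1] + 5 * r[2] == target]
        for target in (3, 4, 5)}

# Sample of standard triples used as a (non-exhaustive) membership test.
samples = [(3, 4, 5), (5, 12, 13), (15, 8, 17), (21, 20, 29), (7, 24, 25)]
found = []
for r1 in rows[3]:
    for r2 in rows[4]:
        for r3 in rows[5]:
            M = np.array([r1, r2, r3])
            if all(is_standard_triple(M @ np.array(t)) for t in samples):
                found.append(M)
```

The identity matrix always survives the search, and any genuine element of S* with entries in the searched range must appear among the candidates; a proof of membership still requires verifying the quadratic-form identity for all triples.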
A linear regularization scheme for inverse problems with unbounded linear operators on Banach spaces
NASA Astrophysics Data System (ADS)
Kohr, Holger
2013-06-01
This paper extends the linear regularization scheme known as the approximate inverse to unbounded linear operators on Banach spaces. The principle of feature reconstruction is adapted from bounded operators to the unbounded scenario and, in addition, a new situation is examined where the data need to be pre-processed to fit into the mathematical model. In all these cases, invariance and regularization properties are surveyed and established for the example of fractional differentiation. Numerical results confirm the derived characteristics of the presented methods.
A Linear Programming Solution to the Faculty Assignment Problem
ERIC Educational Resources Information Center
Breslaw, Jon A.
1976-01-01
Investigates the problem of assigning faculty to courses at a university. A program is developed that is both efficient, in that integer programming is not required, and effective, in that it facilitates interaction by administration in determining the optimal solution. The results of some empirical tests are also reported. (Author)
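The claim that integer programming is not required can be illustrated directly: the assignment constraint matrix is totally unimodular, so a plain LP already returns a 0-1 assignment at an optimal vertex. The cost matrix below is invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# cost[i][j]: cost of assigning faculty member i to course j (illustrative).
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
n = cost.shape[0]

# Doubly stochastic constraints: each row and column of the assignment sums to 1.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0      # faculty i teaches exactly one course
    A_eq[n + i, i::n] = 1.0               # course i gets exactly one teacher

res = linprog(c=cost.ravel(), A_eq=A_eq, b_eq=np.ones(2 * n),
              bounds=[(0.0, 1.0)] * (n * n))
assignment = res.x.reshape(n, n).round()
```

By total unimodularity, every vertex of this polytope is a permutation matrix, so the LP relaxation is exact and no branch-and-bound is needed.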
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem equivalent to a linear program is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation problems. Global convergence is proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID: 27547676
NASA Astrophysics Data System (ADS)
Bagué, Anne; Fuster, Daniel; Popinet, Stéphane; Scardovelli, Ruben; Zaleski, Stéphane
2010-09-01
The temporal instability of parallel two-phase mixing layers is studied with a linear stability code by considering a composite error function base flow. The eigenfunctions of the linear problem are used to initialize the velocity and volume fraction fields for direct numerical simulations of the incompressible Navier-Stokes equations with the open-source GERRIS flow solver. We compare the growth rate of the most unstable mode from the linear stability problem and from the simulation results at moderate and large density and viscosity ratios in order to validate the code for a wide range of physical parameters. The efficiency of the adaptive mesh refinement scheme is also discussed.
Towards Resolving the Crab Sigma-Problem: A Linear Accelerator?
NASA Technical Reports Server (NTRS)
Contopoulos, Ioannis; Kazanas, Demosthenes; White, Nicholas E. (Technical Monitor)
2002-01-01
Using the exact solution of the axisymmetric pulsar magnetosphere derived in a previous publication and the conservation laws of the associated MHD flow, we show that the Lorentz factor of the outflowing plasma increases linearly with distance from the light cylinder. Therefore, the ratio of the Poynting to particle energy flux, generically referred to as σ, decreases inversely proportional to distance, from a large value (typically ≳ 10^4) near the light cylinder to σ ≈ 1 at a transition distance R_trans. Beyond this distance the inertial effects of the outflowing plasma become important and the magnetic field geometry must deviate from the almost monopolar form it attains between R_lc and R_trans. We anticipate that this is achieved by collimation of the poloidal field lines toward the rotation axis, ensuring that the magnetic field pressure in the equatorial region will fall off faster than 1/R^2 (R being the cylindrical radius). This leads both to a value σ = σ_s ≪ 1 at the nebular reverse shock at distance R_s (R_s ≫ R_trans) and to a component of the flow perpendicular to the equatorial component, as required by observation. The presence of the strong shock at R = R_s allows for the efficient conversion of kinetic energy into radiation. We speculate that the Crab pulsar is unique in requiring σ_s ≈ 3 × 10^-3 because of its small translational velocity, which allowed the shock distance R_s to grow to values much greater than R_trans.
NASA Astrophysics Data System (ADS)
Tian, Wenyi; Yuan, Xiaoming
2016-11-01
Linear inverse problems with total variation regularization can be reformulated as saddle-point problems; the primal and dual variables of such a saddle-point reformulation can be discretized in piecewise affine and constant finite element spaces, respectively. Thus, the well-developed primal-dual approach (a.k.a. the inexact Uzawa method) is conceptually applicable to such a regularized and discretized model. When the primal-dual approach is applied, the resulting subproblems may be highly nontrivial and it is necessary to discuss how to tackle them and thus make the primal-dual approach implementable. In this paper, we suggest linearizing the data-fidelity quadratic term of the hard subproblems so as to obtain easier ones. A linearized primal-dual method is thus proposed. Inspired by the fact that the linearized primal-dual method can be explained as an application of the proximal point algorithm, a relaxed version of the linearized primal-dual method, which can often accelerate the convergence numerically with the same order of computation, is also proposed. The global convergence and worst-case convergence rate measured by the iteration complexity are established for the new algorithms. Their efficiency is verified by some numerical results.
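The saddle-point mechanics can be illustrated on a tiny 1D total-variation denoising model, min_x ||Dx||_1 + ½||x − f||², solved with a generic primal-dual (Chambolle-Pock-type) iteration; this is an illustration of the approach, not the authors' specific linearized scheme, and the data and step sizes are assumed:

```python
import numpy as np

# Tiny 1D TV denoising by a generic primal-dual iteration (illustrative).
f = np.array([0.0, 0.0, 1.0, 1.0])            # data (invented)
n = f.size
D = np.diff(np.eye(n), axis=0)                # forward-difference operator
tau = sigma = 0.4                             # tau * sigma * ||D||^2 < 1

x = f.copy()
x_bar = x.copy()
y = np.zeros(n - 1)
for _ in range(300):
    y = np.clip(y + sigma * (D @ x_bar), -1.0, 1.0)        # dual step + projection
    x_new = (x - tau * (D.T @ y) + tau * f) / (1.0 + tau)  # proximal primal step
    x_bar = 2.0 * x_new - x                                # extrapolation
    x = x_new

energy = np.abs(D @ x).sum() + 0.5 * np.sum((x - f) ** 2)
```

The primal step here has a closed form because the data-fidelity term is a simple quadratic; the paper's linearization addresses exactly the situation where that subproblem is no longer trivial.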
An application of a linear programming technique to nonlinear minimax problems
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1973-01-01
A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programming algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
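The linear minimax subproblem at the core of such a differential-correction scheme has a standard LP epigraph form: minimize t subject to −t ≤ a_i^T x − b_i ≤ t. A sketch on an invented small data set:

```python
import numpy as np
from scipy.optimize import linprog

# Chebyshev (minimax) fit of a line y = p0 + p1 * s; data points invented.
s = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.2, 1.8, 3.1])
A = np.column_stack([np.ones_like(s), s])     # residuals r = A p - y

# Variables (p0, p1, t): minimize t subject to -t <= A p - y <= t.
m = A.shape[0]
res = linprog(c=[0.0, 0.0, 1.0],
              A_ub=np.vstack([np.hstack([A, -np.ones((m, 1))]),
                              np.hstack([-A, -np.ones((m, 1))])]),
              b_ub=np.concatenate([y, -y]),
              bounds=[(None, None), (None, None), (0.0, None)])
p, t = res.x[:2], res.x[2]
```

At the optimum, t equals the largest absolute residual, and it can be no larger than the worst residual of the least-squares fit, which is the comparison the report draws.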
Gene Golub; Kwok Ko
2009-03-30
The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve on the algorithms so that the ever increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties in the previous methods, with the goal of enabling accelerator simulations on what was then the world's largest unclassified supercomputer at NERSC for this class of problems. Specifically, a new method, the Hermitian and skew-Hermitian splitting (HSS) method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
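The HSS iteration of Bai, Golub, and Ng alternates two shifted solves, one with the Hermitian part and one with the skew-Hermitian part. A dense toy sketch (the matrix, shift, and iteration count below are illustrative):

```python
import numpy as np

def hss_solve(A, b, alpha=1.0, iters=60):
    """HSS iteration for A x = b with A = H + S, H Hermitian, S skew-Hermitian."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    I = np.eye(n)
    x = np.zeros(n)
    for _ in range(iters):
        # Half-step 1: (alpha I + H) x_{k+1/2} = (alpha I - S) x_k + b
        x = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # Half-step 2: (alpha I + S) x_{k+1} = (alpha I - H) x_{k+1/2} + b
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x + b)
    return x

A = np.array([[3.0, 1.0],
              [-1.0, 2.0]])      # non-symmetric, with positive definite H part
b = np.array([1.0, 1.0])
x = hss_solve(A, b, alpha=2.0)
```

For positive definite H the iteration contracts for any α > 0, with contraction factor max over eigenvalues λ of H of |α − λ|/(α + λ), which is what motivates tuning the shift α.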
Global symmetry relations in linear and viscoplastic mobility problems
NASA Astrophysics Data System (ADS)
Kamrin, Ken; Goddard, Joe
2014-11-01
The mobility tensor of a textured surface is a homogenized effective boundary condition that describes the effective slip of a fluid adjacent to the surface in terms of an applied shear traction far above the surface. In the Newtonian fluid case, perturbation analysis yields a mobility tensor formula, which suggests that regardless of the surface texture (i.e. nonuniform hydrophobicity distribution and/or height fluctuations) the mobility tensor is always symmetric. This conjecture is verified using a Lorentz reciprocity argument. It motivates the question of whether such symmetries would arise for nonlinear constitutive relations and boundary conditions, where the mobility tensor is not a constant but a function of the applied stress. We show that in the case of a strongly dissipative nonlinear constitutive relation (one whose strain rate relates to the stress solely through a scalar Edelen potential) and strongly dissipative surface boundary conditions (ones whose hydrophobic character is described by a potential relating slip to traction), the mobility function of the surface also maintains tensorial symmetry. By extension, the same variational arguments can be applied in problems such as the permeability tensor for viscoplastic flow through porous media, and we find that similar symmetries arise. These findings could be used to simplify the characterization of viscoplastic drag in various anisotropic media. (Joe Goddard is a former graduate student of Acrivos).
Colloidal Assembly via Shape Complementarity
Macfarlane, Robert John; Mirkin, Chad A.
2010-07-15
A simple method for selectively assembling colloidal particles with depletion forces is achieved using the concept of shape complementarity, reminiscent of Fischer's “lock and key” enzyme model. A spherical particle can fit inside a second particle with an indentation of similar size and shape, allowing access to a large variety of assembled structures.
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
A New Bound for the Ratio Between the 2-Matching Problem and Its Linear Programming Relaxation
Boyd, Sylvia; Carr, Robert
1999-07-28
Consider the 2-matching problem defined on the complete graph, with edge costs which satisfy the triangle inequality. We prove that the value of a minimum cost 2-matching is bounded above by 4/3 times the value of its linear programming relaxation, the fractional 2-matching problem. This lends credibility to a long-standing conjecture that the optimal value for the traveling salesman problem is bounded above by 4/3 times the value of its linear programming relaxation, the subtour elimination problem.
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
ERIC Educational Resources Information Center
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
Bramble, J.H.; Pasciak, J.E.
1981-01-01
The linearized scalar potential formulation of the magnetostatic field problem is considered. The approach involves a reformulation of the continuous problem as a parametric boundary problem. By the introduction of a spherical interface and the use of spherical harmonics, the infinite boundary condition can also be satisfied in the parametric framework. The reformulated problem is discretized by finite element techniques and a discrete parametric problem is solved by conjugate gradient iteration. This approach decouples the problem in that only standard Neumann type elliptic finite element systems on separate bounded domains need be solved. The boundary conditions at infinity and the interface conditions are satisfied during the boundary parametric iteration.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1986-01-01
An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
Aspects of complementarity and uncertainty
NASA Astrophysics Data System (ADS)
Vathsan, Radhika; Qureshi, Tabish
2016-08-01
The two-slit experiment with quantum particles provides many insights into the behavior of quantum mechanics, including Bohr’s complementarity principle. Here, we analyze Einstein’s recoiling slit version of the experiment and show how the inevitable entanglement between the particle and the recoiling slit as a which-way detector is responsible for complementarity. We derive the Englert-Greenberger-Yasin duality from this entanglement, which can also be thought of as a consequence of sum-uncertainty relations between certain complementary observables of the recoiling slit. Thus, entanglement is an integral part of the which-way detection process, and so is uncertainty, though in a completely different way from that envisaged by Bohr and Einstein.
ERIC Educational Resources Information Center
Acevedo Nistal, Ana; Van Dooren, Wim; Verschaffel, Lieven
2013-01-01
Thirty-six secondary school students aged 14-16 were interviewed while they chose between a table, a graph or a formula to solve three linear function problems. The justifications for their choices were classified as (1) task-related if they explicitly mentioned the to-be-solved problem, (2) subject-related if students mentioned their own…
Illusion of Linearity in Geometry: Effect in Multiple-Choice Problems
ERIC Educational Resources Information Center
Vlahovic-Stetic, Vesna; Pavlin-Bernardic, Nina; Rajter, Miroslav
2010-01-01
The aim of this study was to examine if there is a difference in the performance on non-linear problems regarding age, gender, and solving situation, and whether the multiple-choice answer format influences students' thinking. A total of 112 students, aged 15-16 and 18-19, were asked to solve problems for which solutions based on proportionality…
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
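The control half of the LSOCE design, the deterministic LQ regulator gain, can be sketched with modern SciPy in place of the 1975 FORTRAN program; the double-integrator plant and weights below are assumed examples, and the dual Kalman-filter (estimation) half is obtained analogously from the filter Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discrete double integrator with quadratic state/control weights (illustrative).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting in the quadratic performance index
R = np.array([[1.0]])  # control weighting

P = solve_discrete_are(A, B, Q, R)                    # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # optimal feedback gain
closed_loop = A - B @ K                               # x_{k+1} = (A - B K) x_k
spectral_radius = np.abs(np.linalg.eigvals(closed_loop)).max()
```

For a stabilizable, detectable pair the closed-loop matrix A − BK is guaranteed stable, which is easy to confirm numerically from its spectral radius.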
NASA Astrophysics Data System (ADS)
Schröder, Jörg; Keip, Marc-André
2012-08-01
The contribution addresses a direct micro-macro transition procedure for electromechanically coupled boundary value problems. The two-scale homogenization approach is implemented into a so-called FE2-method which allows for the computation of macroscopic boundary value problems in consideration of microscopic representative volume elements. The resulting formulation is applicable to the computation of linear as well as nonlinear problems. In the present paper, linear piezoelectric as well as nonlinear electrostrictive material behavior are investigated, where the constitutive equations on the microscale are derived from suitable thermodynamic potentials. The proposed direct homogenization procedure can also be applied for the computation of effective elastic, piezoelectric, dielectric, and electrostrictive material properties.
On high-continuity transfinite element formulations for linear-nonlinear transient thermal problems
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1987-01-01
This paper describes recent developments in the applicability of a hybrid transfinite element methodology, with emphasis on high-continuity formulations for linear/nonlinear transient thermal problems. The proposed concepts furnish accurate temperature distributions and temperature gradients while making use of a relatively small number of degrees of freedom, and the methodology is applicable to linear/nonlinear thermal problems. Characteristic features of the formulations are described in technical detail, as the proposed hybrid approach combines the major advantages and modeling features of high-continuity thermal finite elements in conjunction with transform methods and classical Galerkin schemes. Several numerical test problems are evaluated, and the results obtained validate the proposed concepts for linear/nonlinear thermal problems.
Newton's method for large bound-constrained optimization problems.
Lin, C.-J.; More, J. J.; Mathematics and Computer Science
1999-01-01
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
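The role of the feasible-set geometry can be illustrated with the simplest projection-based relative of such methods: projected gradient descent on a box. This is only a sketch of the projection idea, not the trust region Newton algorithm analyzed in the paper; the quadratic objective and bounds below are made up.

```python
# Projected gradient descent for min f(x) subject to lo <= x <= hi:
# take a gradient step, then project back onto the box coordinate-wise.

def project(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def projected_gradient(grad, x, lo, hi, step=0.1, iters=500):
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# minimize (x0 - 3)^2 + (x1 + 1)^2 on the box [0, 2] x [0, 2]:
# the unconstrained optimum (3, -1) is infeasible, so the solution (2, 0)
# has an upper bound active in x0 and a lower bound active in x1.
grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
x_star = projected_gradient(grad, [1.0, 1.0], lo=[0.0, 0.0], hi=[2.0, 2.0])
```

Identifying which bounds are active at the solution is exactly where strict complementarity assumptions usually enter; the paper's contribution is a convergence theory that does not need them.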
Some comparison of restarted GMRES and QMR for linear and nonlinear problems
Morgan, R.; Joubert, W.
1994-12-31
Comparisons are made between the following methods: QMR, including its transpose-free version; restarted GMRES; and a modified restarted GMRES that uses approximate eigenvectors to improve convergence. For some problems, the modified GMRES is competitive with or better than QMR in terms of the number of matrix-vector products. Also, the GMRES methods can be much better when several similar systems of linear equations must be solved, as in the case of nonlinear problems and ODE problems.
Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control
Gaitsgory, Vladimir; Rossomakhine, Sergey
2015-04-15
The paper aims at the development of an apparatus for the analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but a possible extension of the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.
A strictly improving linear programming algorithm based on a series of Phase 1 problems
Leichner, S.A.; Dantzig, G.B.; Davis, J.W.
1992-04-01
When used on degenerate problems, the simplex method often takes a number of degenerate steps at a particular vertex before moving to the next. In theory (although rarely in practice), the simplex method can actually cycle at such a degenerate point. Instead of trying to modify the simplex method to avoid degenerate steps, we have developed a new linear programming algorithm that is completely impervious to degeneracy. This new method solves the Phase II problem of finding an optimal solution by solving a series of Phase I feasibility problems. Strict improvement is attained at each iteration in the Phase I algorithm, and the Phase II sequence of feasibility problems has linear convergence in the number of Phase I problems. When tested on the 30 smallest NETLIB linear programming test problems, the computational results for the new Phase II algorithm were over 15% faster than the simplex method; on some problems, it was almost two times faster, and on one problem it was four times faster.
On Development of a Problem Based Learning System for Linear Algebra with Simple Input Method
NASA Astrophysics Data System (ADS)
Yokota, Hisashi
2011-08-01
Learning how to express a matrix using keyboard input requires a lot of time for most college students. Therefore, for a problem-based learning system for linear algebra to be accessible to college students, it is essential to develop a simple method for expressing matrices. By studying the two most widely used input methods for expressing matrices, a simpler input method is obtained. Furthermore, using this input method and an educator's knowledge structure as a concept map, a problem-based learning system for linear algebra which is capable of assessing students' knowledge structure and skill is developed.
Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem
Yoo, Jaechil
1996-12-31
Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid method is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.
Weighted linear least squares problem: an interval analysis approach to rank determination
Manteuffel, T. A.
1980-08-01
This is an extension of the work in SAND-80-0655 to the weighted linear least squares problem. Given the weighted linear least squares problem WAx ≈ Wb, where W is a diagonal weighting matrix, and bounds on the uncertainty in the elements of A, we define an interval matrix A^I that contains all perturbations of A due to these uncertainties and say that the problem is rank deficient if any member of A^I is rank deficient. It is shown that, if WA = QR is the QR decomposition of WA, then Q and R^-1 can be used to bound the rank of A^I. A modification of the modified Gram-Schmidt QR decomposition yields an algorithm that implements these results. The extra arithmetic is O(MN). Numerical results show the algorithm to be effective on problems in which the weights vary greatly in magnitude.
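For context, the underlying weighted least-squares problem is small enough to sketch in pure Python for a straight-line fit, by forming the normal equations of the scaled system (WA)x ≈ Wb. This illustrates only the problem being analyzed; the rank test above works with the QR factors of WA rather than the normal equations, and the data and weights here are made up.

```python
# Weighted linear least squares for a line fit y ~ c0 + c1*t: minimize
# sum(w_i^2 * (c0 + c1*t_i - y_i)^2) by solving the 2x2 normal equations
# of the scaled system (WA)x = Wb in closed form.

def weighted_line_fit(ts, ys, ws):
    s = sum(w * w for w in ws)
    st = sum(w * w * t for w, t in zip(ws, ts))
    stt = sum(w * w * t * t for w, t in zip(ws, ts))
    sy = sum(w * w * y for w, y in zip(ws, ys))
    sty = sum(w * w * t * y for w, t, y in zip(ws, ts, ys))
    det = s * stt - st * st          # normal-equation determinant
    return ((stt * sy - st * sty) / det, (s * sty - st * sy) / det)

# Data lying exactly on y = 1 + 2t is recovered for any positive weights,
# even when the weights vary greatly in magnitude.
c0, c1 = weighted_line_fit([0.0, 1.0, 2.0], [1.0, 3.0, 5.0], [1.0, 10.0, 0.1])
```

When the weights vary over many orders of magnitude, the determinant computed this way can lose accuracy, which is precisely why the abstract's rank analysis goes through an orthogonal (QR) factorization instead.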
NASA Astrophysics Data System (ADS)
Koleva, M. N.
2007-10-01
We consider stationary linear and nonlinear problems on non-connected layers with distinct material properties. A version of the finite element method (FEM) is used for discretization of the continuous problems. We formulate sufficient conditions under which we prove the discrete maximum principle and convergence of the numerical higher-order finite elements solution. Efficient algorithm for solution of the FEM algebraic equations is proposed. Numerical experiments are also discussed.
A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, Fred T.
1992-01-01
A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.
NASA Astrophysics Data System (ADS)
Sommariva, Sara; Sorrentino, Alberto
2014-11-01
We discuss the use of a recent class of sequential Monte Carlo methods for solving inverse problems characterized by a semi-linear structure, i.e. where the data depend linearly on a subset of variables and nonlinearly on the remaining ones. In this type of problems, under proper Gaussian assumptions one can marginalize the linear variables. This means that the Monte Carlo procedure needs only to be applied to the nonlinear variables, while the linear ones can be treated analytically; as a result, the Monte Carlo variance and/or the computational cost decrease. We use this approach to solve the inverse problem of magnetoencephalography, with a multi-dipole model for the sources. Here, data depend nonlinearly on the number of sources and their locations, and depend linearly on their current vectors. The semi-analytic approach enables us to estimate the number of dipoles and their location from a whole time-series, rather than a single time point, while keeping a low computational cost.
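The marginalization idea can be sketched in miniature: when data depend linearly on an amplitude and nonlinearly on a shape parameter, the linear variable has a closed-form least-squares value for every candidate nonlinear value, so the search runs only over the nonlinear parameter. This toy grid search stands in for the sequential Monte Carlo sampler of the paper; the model and numbers are made up.

```python
# Semi-analytic fit of y = q * sin(theta * t): for each candidate theta the
# optimal amplitude q is available in closed form, so only theta is searched.
import math

ts = [0.1 * i for i in range(50)]
theta_true, q_true = 1.3, 2.0
ys = [q_true * math.sin(theta_true * t) for t in ts]   # noise-free toy data

def marginal_fit(theta):
    """Closed-form least-squares amplitude and residual for a given theta."""
    f = [math.sin(theta * t) for t in ts]
    q = sum(y * fi for y, fi in zip(ys, f)) / sum(fi * fi for fi in f)
    rss = sum((y - q * fi) ** 2 for y, fi in zip(ys, f))
    return rss, q

candidates = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
best_theta = min(candidates, key=lambda th: marginal_fit(th)[0])
best_q = marginal_fit(best_theta)[1]
```

Treating the linear variables analytically in this way is what reduces the Monte Carlo variance and cost in the magnetoencephalography application, where theta plays the role of the dipole locations and q of their current vectors.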
NASA Technical Reports Server (NTRS)
Bauld, N. R., Jr.; Goree, J. G.
1983-01-01
The accuracy of the finite difference method in the solution of linear elasticity problems that involve either a stress discontinuity or a stress singularity is considered. Solutions to three elasticity problems are discussed in detail: a semi-infinite plane subjected to a uniform load over a portion of its boundary; a bimetallic plate under uniform tensile stress; and a long, midplane symmetric, fiber reinforced laminate subjected to uniform axial strain. Finite difference solutions to the three problems are compared with finite element solutions to corresponding problems. For the first problem a comparison with the exact solution is also made. The finite difference formulations for the three problems are based on second order finite difference formulas that provide for variable spacings in two perpendicular directions. Forward and backward difference formulas are used near boundaries where their use eliminates the need for fictitious grid points.
Improving Students' Representational Flexibility in Linear-Function Problems: An Intervention
ERIC Educational Resources Information Center
Acevedo Nistal, A.; Van Dooren, W.; Verschaffel, L.
2014-01-01
This study evaluates the effects of an intervention aimed at improving representational flexibility in linear-function problems. Forty-nine students aged 13-16 participated in the study. A pretest-intervention-posttest design with an experimental and control group was used. At pretest, both groups solved a choice test, where they could freely…
The problem of scheduling for the linear section of a single-track railway
NASA Astrophysics Data System (ADS)
Akimova, Elena N.; Gainanov, Damir N.; Golubev, Oleg A.; Kolmogortsev, Ilya D.; Konygin, Anton V.
2016-06-01
The paper is devoted to the problem of scheduling for the linear section of a single-track railway: how to organize the flow in both directions in the most efficient way. In this paper, the authors propose an algorithm for scheduling, examine the properties of this algorithm and perform the computational experiments.
High Order Finite Difference Methods, Multidimensional Linear Problems and Curvilinear Coordinates
NASA Technical Reports Server (NTRS)
Nordstrom, Jan; Carpenter, Mark H.
1999-01-01
Boundary and interface conditions are derived for high order finite difference methods applied to multidimensional linear problems in curvilinear coordinates. The boundary and interface conditions lead to conservative schemes and strict and strong stability provided that certain metric conditions are met.
Linear Integro-differential Schroedinger and Plate Problems Without Initial Conditions
Lorenzi, Alfredo
2013-06-15
Via Carleman's estimates we prove uniqueness and continuous dependence results for the temporal traces of solutions to overdetermined linear ill-posed problems related to Schroedinger and plate equation. The overdetermination is prescribed in an open subset of the (space-time) lateral boundary.
Complementarity and Symmetry in Family Therapy Communication.
ERIC Educational Resources Information Center
Heatherington, Laurie; Friedlander, Myrna L.
1990-01-01
Examined relational control communication patterns in systemic family therapy sessions. Results from 29 families showed significantly more complementarity than symmetry. Neither complementarity nor symmetry was predictive of family members' perceptions of the therapeutic alliance as measured by Couple and Family Therapy Alliance Scales. (Author/NB)
NASA Technical Reports Server (NTRS)
Kent, James; Holdaway, Daniel
2015-01-01
A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
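The nonlinearity introduced by limiters can be demonstrated directly: a scheme S is linear exactly when S(u + v) = S(u) + S(v). The sketch below uses a one-step periodic-grid update in which first-order upwind passes this test while a minmod-limited variant fails it; the limited scheme is a generic stand-in for flux-limited schemes, not the GEOS-5 transport scheme itself, and the test data are made up.

```python
# One-step 1D advection updates on a periodic grid (Courant number c), used
# to probe linearity: a scheme S is linear iff S(u + v) == S(u) + S(v).

def upwind(u, c=0.5):
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]

def minmod(a, b):
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited(u, c=0.5):
    n = len(u)
    # piecewise-linear reconstruction with a minmod-limited slope:
    # the limiter makes the flux a nonlinear function of u
    flux = [u[i] + 0.5 * (1 - c) * minmod(u[i] - u[i - 1],
                                          u[(i + 1) % n] - u[i])
            for i in range(n)]
    return [u[i] - c * (flux[i] - flux[i - 1]) for i in range(n)]

add = lambda a, b: [x + y for x, y in zip(a, b)]
u = [0.0, 0.5, 1.0, 1.0, 0.0, 0.0]
v = [0.0, 0.0, 0.5, 1.0, 1.0, 0.0]
upwind_is_linear = upwind(add(u, v)) == add(upwind(u), upwind(v))      # True
limited_is_linear = limited(add(u, v)) == add(limited(u), limited(v))  # False
```

The superposition test fails for the limited scheme because the minmod slope chosen for u + v is not the sum of the slopes chosen for u and v separately, which is the behavior that complicates tangent linear and adjoint model development.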
Geometric tools for solving the FDI problem for linear periodic discrete-time systems
NASA Astrophysics Data System (ADS)
Longhi, Sauro; Monteriù, Andrea
2013-07-01
This paper studies the problem of detecting and isolating faults in linear periodic discrete-time systems. The aim is to design an observer-based residual generator where each residual is sensitive to one fault, whilst remaining insensitive to the other faults that can affect the system. Making use of geometric tools, and in particular of the outer observable subspace notion, the Fault Detection and Isolation (FDI) problem is formulated and necessary and sufficient solvability conditions are given. An algorithmic procedure is described to determine the solution of the FDI problem.
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2014-04-01
The typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover problem, which is a type of integer programming (IP) problem. To deal with LP and IP using statistical mechanics, a lattice-gas model on the Erdös-Rényi random graphs is analyzed by a replica method. It is found that the LP optimal solution is typically equal to that given by IP below the critical average degree c*=e in the thermodynamic limit. The critical threshold for LP = IP extends the previous result c = 1, and coincides with the replica symmetry-breaking threshold of the IP.
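The LP relaxation in question can be made concrete on toy graphs. The vertex-cover LP is known to have half-integral optimal solutions, so on a tiny graph the LP can be brute-forced over the values {0, 1/2, 1} and the IP over {0, 1}. The sketch below, an illustration only and not the replica analysis itself, shows LP = IP on a path but LP < IP on a triangle (an odd cycle), the kind of gap whose typical disappearance at low average degree the abstract characterizes.

```python
# Brute-force minimum vertex cover: IP over values {0, 1}, LP over the
# half-integral values {0, 1/2, 1} (valid because vertex-cover LP extreme
# points are half-integral). Constraint: x_i + x_j >= 1 for every edge.
from itertools import product

def vertex_cover_opt(n, edges, values):
    best = float("inf")
    for x in product(values, repeat=n):
        if all(x[i] + x[j] >= 1 for i, j in edges):   # every edge covered
            best = min(best, sum(x))
    return best

triangle = [(0, 1), (1, 2), (0, 2)]
path = [(0, 1), (1, 2)]
ip_tri = vertex_cover_opt(3, triangle, (0, 1))        # integer optimum: 2
lp_tri = vertex_cover_opt(3, triangle, (0, 0.5, 1))   # LP optimum: 1.5
ip_path = vertex_cover_opt(3, path, (0, 1))           # integer optimum: 1
lp_path = vertex_cover_opt(3, path, (0, 0.5, 1))      # LP optimum: 1
```

On the triangle the all-halves solution is feasible and cheaper than any integral cover; sparse random graphs below the critical average degree typically avoid such structures, which is why LP = IP there.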
The linearized characteristics method and its application to practical nonlinear supersonic problems
NASA Technical Reports Server (NTRS)
Ferri, Antonio
1952-01-01
The method of characteristics has been linearized by assuming that the flow field can be represented as a basic flow field determined by nonlinearized methods and a linearized superposed flow field that accounts for small changes of boundary conditions. The method has been applied to two-dimensional rotational flow where the basic flow is potential flow, and to axially symmetric problems where conical flows have been used as the basic flows. In both cases the method allows the determination of the flow field to be simplified and the numerical work to be reduced to a few calculations. The calculation of axially symmetric flow can be simplified if tabulated values of some coefficients of the conical flow are obtained. The method has also been applied to slender bodies without symmetry and to some three-dimensional wing problems where two-dimensional flow can be used as the basic flow. Both problems were previously unsolved in the nonlinear-flow approximation.
Voila: A visual object-oriented iterative linear algebra problem solving environment
Edwards, H.C.; Hayes, L.J.
1994-12-31
Application of iterative methods to solve a large linear system of equations currently involves writing a program which calls iterative method subprograms from a large software package. These subprograms have complex interfaces which are difficult to use and even more difficult to program. A problem solving environment specifically tailored to the development and application of iterative methods is needed. This need will be fulfilled by Voila, a problem solving environment which provides a visual programming interface to object-oriented iterative linear algebra kernels. Voila will provide several quantum improvements over current iterative method problem solving environments. First, programming and applying iterative methods is considerably simplified through Voila's visual programming interface. Second, iterative method algorithm implementations are independent of any particular sparse matrix data structure through Voila's object-oriented kernels. Third, the compile-link-debug process is eliminated as Voila operates as an interpreter.
Black hole complementarity and firewall in two dimensions
NASA Astrophysics Data System (ADS)
Kim, Wontae; Lee, Bum-Hoon; Yeom, Dong-han
2013-05-01
In connection with black hole complementarity, we study the possibility of the duplication of information in the RST model which is an exactly soluble quantized model in two dimensions. We find that the duplication of information can be observed without resort to assuming an excessively large number of scalar fields. If we introduce a firewall, then we can circumvent this problem; however, the firewall should be outside the event horizon.
Zelinski, Adam C.; Goyal, Vivek K.; Adalsteinsson, Elfar
2010-01-01
A problem that arises in slice-selective magnetic resonance imaging (MRI) radio-frequency (RF) excitation pulse design is abstracted as a novel linear inverse problem with a simultaneous sparsity constraint. Multiple unknown signal vectors are to be determined, where each passes through a different system matrix and the results are added to yield a single observation vector. Given the matrices and lone observation, the objective is to find a simultaneously sparse set of unknown vectors that approximately solves the system. We refer to this as the multiple-system single-output (MSSO) simultaneous sparse approximation problem. This manuscript contrasts the MSSO problem with other simultaneous sparsity problems and conducts an initial exploration of algorithms with which to solve it. Greedy algorithms and techniques based on convex relaxation are derived and compared empirically. Experiments involve sparsity pattern recovery in noiseless and noisy settings and MRI RF pulse design. PMID:20445814
Mitter, S.K.
1980-06-01
The main thesis of this paper is that there are striking similarities between the mathematical problems of stochastic system theory, notably linear and non-linear filtering theory, and mathematical developments underlying quantum mechanics and quantum field theory. Thus the mathematical developments of the past thirty years in functional analysis, Lie groups and Lie algebras, group representations, and probabilistic methods of quantum theory can serve as a guide and indicator to search for an appropriate theory of stochastic systems. In the current state of development of linear and non-linear filtering theory, it is best to proceed by 'analogy' and with care, since 'unitarity', which plays such an important part in quantum mechanics and quantum field theory, is not necessarily relevant to linear and non-linear filtering theory. The partial differential equations that arise in quantum theory are generally wave equations, whereas the partial differential equations arising in filtering theory are stochastic parabolic equations. Nevertheless the possibility of passing to a wave equation by appropriate analytic continuation from the parabolic equation, reminiscent of the current program in Euclidean field theory, should not be overlooked.
Linear Stability of Elliptic Lagrangian Solutions of the Planar Three-Body Problem via Index Theory
NASA Astrophysics Data System (ADS)
Hu, Xijun; Long, Yiming; Sun, Shanzhong
2014-09-01
It is well known that the linear stability of Lagrangian elliptic equilateral triangle homographic solutions in the classical planar three-body problem depends on the mass parameter β and the eccentricity e. We are not aware of any existing analytical method which relates the linear stability of these solutions to the two parameters directly in the full rectangle [0, 9] × [0, 1), aside from perturbation methods for e > 0 small enough, blow-up techniques for e sufficiently close to 1, and numerical studies. In this paper, we introduce a new rigorous analytical method to study the linear stability of these solutions in terms of the two parameters in the full (β, e) range [0, 9] × [0, 1) via the ω-index theory of symplectic paths for ω belonging to the unit circle of the complex plane, and the theory of linear operators. After establishing the ω-index decreasing property of the solutions in β for fixed e, we prove the existence of three curves located from left to right in the rectangle [0, 9] × [0, 1), among which two are -1 degeneracy curves and the third one is the right envelope curve of the ω-degeneracy curves, and show that the linear stability pattern of such elliptic Lagrangian solutions changes if and only if the parameter (β, e) passes through each of these three curves. Interesting symmetries of these curves are also observed. The linear stability of the singular case when the eccentricity e approaches 1 is also analyzed in detail.
Stable computation of search directions for near-degenerate linear programming problems
Hough, P.D.
1997-03-01
In this paper, we examine stability issues that arise when computing search directions (Δx, Δy, Δs) for a primal-dual path-following interior point method for linear programming. The dual step Δy can be obtained by solving a weighted least-squares problem for which the weight matrix becomes extremely ill-conditioned near the boundary of the feasible region. Hough and Vavasis proposed using a type of complete orthogonal decomposition (the COD algorithm) to solve such a problem and presented stability results. The work presented here addresses the stable computation of the primal step Δx and the change in the dual slacks Δs. These directions can be obtained in a straightforward manner, but near-degeneracy in the linear programming instance introduces ill-conditioning which can cause numerical problems in this approach. Therefore, we propose a new method of computing Δx and Δs. More specifically, this paper describes an orthogonal projection algorithm that extends the COD method. Unlike other algorithms, this method is stable for interior point methods without assuming nondegeneracy in the linear programming instance. Thus, it is more general than other algorithms on near-degenerate problems.
Three-dimensional theory of water impact. Part 2. Linearized Wagner problem
NASA Astrophysics Data System (ADS)
Korobkin, A. A.; Scolan, Y.-M.
The three-dimensional problem of blunt-body impact onto a free surface of an ideal and incompressible liquid is considered within the Wagner approximation. This approximation is formally valid during an initial stage, when the depth of penetration is small, the wetted part of the body can be approximately replaced with a flat disk and the boundary conditions can be linearized and imposed on the undisturbed liquid surface. In the present context this problem will be referred to as the classical Wagner problem. However the classical Wagner problem of impact is nonlinear despite the fact that the equations of liquid motion and boundary conditions are linearized. The reason is that the contact region between the liquid and the entering body is unknown in advance and has to be determined together with the liquid flow. Several exact solutions of the three-dimensional Wagner problem are known as detailed in Part 1 (J. Fluid Mech. vol. 440, 2001, p. 293). Among these solutions the axisymmetric one is the simplest. In this paper, an additional linearization of the Wagner problem is considered. This linearization is performed on the basis of an axisymmetric solution via a perturbation technique. The small parameter ɛ is a measure of the discrepancy of the actual shape with respect to the closest axisymmetric shape. The method of solution of this problem is detailed here. The resulting solutions are compared to available exact solutions. Three shapes are studied: elliptic paraboloid; inclined cone; and pyramid. These shapes must be blunt in the vicinity of the initial contact point and hence only small deadrise angles can be considered. The stability of the obtained solutions is analysed. The second-order solution of the present Wagner problem with respect to ɛ is considered. That yields the leading-order correction to the hydrodynamic force which acts on an almost axisymmetric body entering liquid vertically. Other nonlinearities are not accounted for. Among them, there
NASA Astrophysics Data System (ADS)
Kent, James; Holdaway, Daniel
2015-04-01
Data assimilation is one of the most common inverse problems encountered in geophysical models. One of the leading techniques used for data assimilation in numerical weather prediction is four dimensional variational data assimilation (4DVAR). In 4DVAR the tangent linear and adjoint versions of the nonlinear model are used to perform a minimization with time dependent observations. In order for the minimization to perform well requires a certain degree of linearity in both the underlying equations and numerical methods used to solve them. Advection is central to the underlying equations used for numerical weather prediction, as well as many other geophysical models. From the advection of momentum, temperature and moisture to passive tracers such as smoke from wildfires, accurate transport is paramount. Over recent decades much effort has been directed toward the development of positive definite, non-oscillatory, mass conserving advection schemes. These schemes are capable of giving excellent representation of transport, but by definition introduce nonlinearity into equations that are otherwise quite linear. One such example is the flux limited piecewise parabolic method (PPM) used in NASA's Goddard Earth Observing System version 5 (GEOS-5), which can perform very poorly when linearized. With a view to an optimal representation of transport in the linear versions of atmospheric models and 4DVAR we analyse the performance of a number of different linear and nonlinear advection schemes. The schemes are analysed using a one dimensional case study, a passive tracer in GEOS-5 experiment and using the full linearized version of GEOS-5. Using the three studies it is shown that higher order linear schemes provide the best representation of the transport of perturbations and sensitivities. In certain situations the nonlinear schemes give the best performance but are subject to issues. It is also shown that many of the desirable properties of the nonlinear schemes are
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently in the past. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, which is known as a seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results compared to the stand-alone genetic programming method.
Coelho, Clarimar José; Galvão, Roberto K H; de Araújo, Mário César U; Pimentel, Maria Fernanda; da Silva, Edvan Cirino
2003-01-01
A novel strategy for the optimization of wavelet transforms with respect to the statistics of the data set in multivariate calibration problems is proposed. The optimization follows a linear semi-infinite programming formulation, which does not display local maxima problems and can be reproducibly solved with modest computational effort. After the optimization, a variable selection algorithm is employed to choose a subset of wavelet coefficients with minimal collinearity. The selection allows the building of a calibration model by direct multiple linear regression on the wavelet coefficients. In an illustrative application involving the simultaneous determination of Mn, Mo, Cr, Ni, and Fe in steel samples by ICP-AES, the proposed strategy yielded more accurate predictions than PCR, PLS, and nonoptimized wavelet regression. PMID:12767151
Solution of second order quasi-linear boundary value problems by a wavelet method
Zhang, Lei; Zhou, Youhe; Wang, Jizeng
2015-03-10
A wavelet Galerkin method based on expansions in Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one concerns nonlinear heat conduction and the other the bending of elastic beams. Numerical results are obtained by the proposed wavelet method. By comparing with relevant analytical solutions as well as solutions obtained by other methods, we find that the proposed method shows better efficiency and accuracy than several others, and that the rate of convergence can even reach order 5.8.
Lorber, A.A.; Carey, G.F.; Bova, S.W.; Harle, C.H.
1996-12-31
The connection between the solution of linear systems of equations by iterative methods and explicit time stepping techniques is used to accelerate to steady state the solution of ODE systems arising from discretized PDEs which may involve either physical or artificial transient terms. Specifically, a class of Runge-Kutta (RK) time integration schemes with extended stability domains has been used to develop recursion formulas which lead to accelerated iterative performance. The coefficients for the RK schemes are chosen based on the theory of Chebyshev iteration polynomials in conjunction with a local linear stability analysis. We refer to these schemes as Chebyshev Parameterized Runge-Kutta (CPRK) methods. CPRK methods of one to four stages are derived as functions of the parameters which describe an ellipse E that the stability domain of the methods is known to contain. Of particular interest are two-stage, first-order CPRK methods and four-stage, first-order methods. It is found that the former method can be identified with any two-stage RK method through the correct choice of parameters. The latter method is found to have a wide range of stability domains, with a maximum extension of 32 along the real axis. Recursion performance results are presented below for a model linear convection-diffusion problem as well as non-linear fluid flow problems discretized by both finite-difference and finite-element methods.
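The equivalence between explicit pseudo-time stepping and iterative solution can be sketched with a generic two-stage Runge-Kutta scheme: marching dx/dt = b - Ax to steady state produces the solution of Ax = b. The sketch uses standard midpoint coefficients on a made-up SPD system, not the Chebyshev-optimized CPRK coefficients derived in the paper.

```python
# Pseudo-transient iteration: the steady state of dx/dt = b - A*x solves
# A*x = b, so an explicit RK time stepper acts as an iterative linear solver.

def residual(x, a_mat, b):
    """r = b - A*x, the 'time derivative' driving the pseudo-transient."""
    return [bi - sum(aij * xj for aij, xj in zip(row, x))
            for row, bi in zip(a_mat, b)]

def rk2_step(x, a_mat, b, dt):
    """One explicit midpoint (two-stage RK) step."""
    r1 = residual(x, a_mat, b)
    mid = [xi + 0.5 * dt * ri for xi, ri in zip(x, r1)]
    r2 = residual(mid, a_mat, b)
    return [xi + dt * ri for xi, ri in zip(x, r2)]

a_mat = [[4.0, 1.0], [1.0, 3.0]]     # small SPD test system
b = [1.0, 2.0]
x = [0.0, 0.0]
for _ in range(200):                 # pseudo-time iterations
    x = rk2_step(x, a_mat, b, dt=0.4)
```

For this scheme the error is multiplied each step by g(z) = 1 - z + z²/2 with z = dt·λ for each eigenvalue λ of A; shaping this stability polynomial over an ellipse containing the spectrum is exactly what the CPRK construction optimizes.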
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Teglas, Russell
1987-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
A direct analytical approach for solving linear inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Ainajem, N. M.; Ozisik, M. N.
1985-08-01
The analytical approach presented for the solution of linear inverse heat conduction problems demonstrates that applied surface conditions involving abrupt changes with time can be effectively accommodated with polynomial representations in time over the entire time domain; the resulting inverse analysis predicts surface conditions accurately. All previous attempts have experienced difficulties in the development of analytic solutions that are applicable over the entire time domain when a polynomial representation is used.
NASA Astrophysics Data System (ADS)
Mancini, G.
2002-02-01
Based on a recently published efficient, exact algorithm for the ring perception problem, a new approach is presented for feeding rings into the linear-independence test used to enter a minimal basis with no duplicate information, thus reducing calls to the procedure that is most demanding in terms of computational order. The efficiency of a perfect hashing algorithm is effectively attained by a "pre-filtering" method derived from simple considerations.
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least-squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and tests and validation of the methods on realistic problems.
1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes.
2. We investigated strategies to improve robustness of parallel preconditioners in the specific case of a PDE with discontinuous coefficients.
3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation; these are often difficult linear systems to solve by iterative methods.
4. We also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems.
5. We developed an effective strategy for performing ILU factorizations when the matrix is highly indefinite. The strategy uses shifting in some optimal way; the method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases.
6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs.
7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available and was the first such library that offers complete iterative solvers for GPUs.
8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods.
9. We released a new version (version 3) of our parallel solver pARMS. As part of this we tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth.
10. As an application of polynomial preconditioning we considered the
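To show concretely where a preconditioner enters a Krylov iteration, here is a minimal Jacobi (diagonal) preconditioned conjugate gradient in plain Python. This is a toy stand-in for the ILU/pARMS machinery the report describes, and the small SPD matrix is invented for the example.

```python
# Jacobi-preconditioned CG for Ax = b, A symmetric positive definite.
# The preconditioner application is the single line z = M^{-1} r.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def pcg(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]
    M_inv = [1.0 / A[i][i] for i in range(n)]    # Jacobi preconditioner
    z = [mi * ri for mi, ri in zip(M_inv, r)]    # z = M^{-1} r
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv, r)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # SPD test matrix
b = [5.0, 5.0, 3.0]                                      # solution [1, 1, 1]
x = pcg(A, b)
```

A stronger preconditioner (ILU, multilevel) replaces only the `z = M^{-1} r` step; the surrounding Krylov recurrence is unchanged, which is why preconditioner robustness can be studied somewhat independently of the solver.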
Chen, G; de Figueiredo, R P
1993-01-01
The unified approach presented for optimal image interpolation problems provides a constructive procedure for finding explicit and closed-form optimal solutions when the interpolation is either spatial or temporal-spatial. The unknown image is reconstructed from a finite set of sampled data in such a way that a mean-square error is minimized, by first expressing the solution in terms of the reproducing kernel of a related Hilbert space and then constructing this kernel using the fundamental solution of an induced linear partial differential equation, or the Green's function of the corresponding self-adjoint operator. It is proved that in most cases of the general image reconstruction problem described by a first- or second-order linear partial differential operator, closed-form fundamental solutions (or Green's functions) can be found for the corresponding operators. An efficient method for obtaining these closed-form fundamental solutions (or Green's functions) is presented. A computer simulation demonstrates the reconstruction procedure.
Reintroducing the Concept of Complementarity into Psychology.
Wang, Zheng; Busemeyer, Jerome
2015-01-01
Central to quantum theory is the concept of complementarity. In this essay, we argue that complementarity is also central to the emerging field of quantum cognition. We review the concept, its historical roots in psychology, and its development in quantum physics and offer examples of how it can be used to understand human cognition. The concept of complementarity provides a valuable and fresh perspective for organizing human cognitive phenomena and for understanding the nature of measurements in psychology. In turn, psychology can provide valuable new evidence and theoretical ideas to enrich this important scientific concept.
Bohrian Complementarity in the Light of Kantian Teleology
NASA Astrophysics Data System (ADS)
Pringe, Hernán
2014-03-01
The Kantian influences on Bohr's thought and the relationship between the perspective of complementarity in physics and in biology seem at first sight completely unrelated issues. However, the goal of this work is to show their intimate connection. We shall see that Bohr's views on biology shed light on Kantian elements of his thought, which enables a better understanding of his complementary interpretation of quantum theory. For this purpose, we shall begin by discussing Bohr's views on the analogies concerning the epistemological situation in biology and in physics. Later, we shall compare the Bohrian and the Kantian approaches to the science of life in order to show their close connection. On this basis, we shall finally turn to the issue of complementarity in quantum theory in order to assess what we can learn about the epistemological problems in the quantum realm from a consideration of Kant's views on teleology.
NASA Astrophysics Data System (ADS)
Korpusov, M. O.; Panin, A. A.
2014-10-01
We consider an abstract Cauchy problem for a formally hyperbolic equation with double non-linearity. Under certain conditions on the operators in the equation, we prove its local (in time) solubility and give sufficient conditions for finite-time blow-up of solutions of the corresponding abstract Cauchy problem. The proof uses a modification of a method of Levine. We give examples of Cauchy problems and initial-boundary value problems for concrete non-linear equations of mathematical physics.
A new gradient-based neural network for solving linear and quadratic programming problems.
Leung, Y; Chen, K Z; Jiao, Y C; Gao, X B; Leung, K S
2001-01-01
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) such that E(x, y) is convex and differentiable, and the resulting network is more efficient. This network involves all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with a unique solution or infinitely many solutions, we prove rigorously that, for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from existing networks that use the penalty method or Lagrange method, and the inequality constraints are properly handled. Simulation results show that the proposed neural network is feasible and efficient.
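Such networks are continuous-time gradient dynamical systems; the core idea can be conveyed by a discretized gradient flow with projection onto the feasible set. The sketch below is a simplification (plain projected gradient on a toy nonnegativity-constrained QP), not the authors' E(x, y) energy network, and the problem data are invented.

```python
# Discretized projected gradient flow for a toy convex QP:
#   minimize 0.5 * x'Qx + c'x   subject to x >= 0.
# Each step is an Euler step along -grad, followed by projection onto x >= 0.

def qp_projected_gradient(Q, c, step=0.1, iters=500):
    n = len(c)
    x = [0.0] * n
    for _ in range(iters):
        grad = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        x = [max(0.0, x[i] - step * grad[i]) for i in range(n)]
    return x

Q = [[2.0, 0.0], [0.0, 2.0]]
c = [-2.0, 2.0]     # unconstrained minimum is (1, -1); constrained optimum (1, 0)
x = qp_projected_gradient(Q, c)
```

For convex Q and a small enough step, the discrete trajectory converges to the constrained optimum, mirroring the Lyapunov-based convergence argument for the continuous-time network.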
A Linear Time Algorithm for the Minimum Spanning Caterpillar Problem for Bounded Treewidth Graphs
NASA Astrophysics Data System (ADS)
Dinneen, Michael J.; Khosravani, Masoud
We consider the Minimum Spanning Caterpillar Problem (MSCP) in a graph where each edge has two costs, spine (path) cost and leaf cost, depending on whether it is used as a spine or a leaf edge. The goal is to find a spanning caterpillar in which the sum of its edge costs is the minimum. We show that the problem has a linear time algorithm when a tree decomposition of the graph is given as part of the input. Despite the fast growing constant factor of the time complexity of our algorithm, it is still practical and efficient for some classes of graphs, such as outerplanar, series-parallel (K 4 minor-free), and Halin graphs. We also briefly explain how one can modify our algorithm to solve the Minimum Spanning Ring Star and the Dual Cost Minimum Spanning Tree Problems.
Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems.
Choi, Sou-Cheng T; Saunders, Michael A
2014-02-01
We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components, such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also borrows from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or another suitable technique) to create a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS-DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
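The round-then-explore outer loop can be sketched in a few lines. This is a schematic reconstruction, not IESIP itself: the small ILP, its constraints, and the relaxed optimum (which a real run would obtain from a simplex solver) are all invented for illustration, and a unit-move exploratory search of this kind is only a local method.

```python
# Round a continuous LP optimum to a feasible integer point, then apply
# Hooke-and-Jeeves-style unit exploratory moves while they improve the objective.

def feasible(x, constraints):
    # each constraint is (coefficients, bound) meaning sum(a_i * x_i) <= bound
    return all(sum(a * xi for a, xi in zip(coeffs, x)) <= bound
               for coeffs, bound in constraints) and all(xi >= 0 for xi in x)

def exploratory_search(obj, x0, constraints):
    best = list(x0)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):           # unit moves along each axis
            for step in (1, -1):
                trial = list(best)
                trial[i] += step
                if feasible(trial, constraints) and obj(trial) > obj(best):
                    best, improved = trial, True
    return best

obj = lambda x: 3 * x[0] + 2 * x[1]          # maximize 3x + 2y
constraints = [((1, 2), 7), ((3, 1), 9)]     # x + 2y <= 7, 3x + y <= 9
relaxed = (2.2, 2.4)                         # continuous optimum (assumed given)
start = [round(v) for v in relaxed]          # rounded point (2, 2) is feasible here
best = exploratory_search(obj, start, constraints)
```

If rounding lands on an infeasible point, a real implementation must repair it first; that step is omitted here.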
Experimental test of universal complementarity relations.
Weston, Morgan M; Hall, Michael J W; Palsson, Matthew S; Wiseman, Howard M; Pryde, Geoff J
2013-05-31
Complementarity restricts the accuracy with which incompatible quantum observables can be jointly measured. Despite popular conception, the Heisenberg uncertainty relation does not quantify this principle. We report the experimental verification of universally valid complementarity relations, including an improved relation derived here. We exploit Einstein-Podolsky-Rosen correlations between two photonic qubits to jointly measure incompatible observables of one. The product of our measurement inaccuracies is low enough to violate the widely used, but not universally valid, Arthurs-Kelly relation.
Boundary parametric approximation to the linearized scalar potential magnetostatic field problem
Bramble, J.H.; Pasciak, J.E.
1984-01-01
We consider the linearized scalar potential formulation of the magnetostatic field problem in this paper. Our approach involves a reformulation of the continuous problem as a parametric boundary problem. By the introduction of a spherical interface and the use of spherical harmonics, the infinite boundary conditions can also be satisfied in the parametric framework. That is, the field in the exterior of a sphere is expanded in a harmonic series of eigenfunctions for the exterior harmonic problem. The approach is essentially a finite element method coupled with a spectral method via a boundary parametric procedure. The reformulated problem is discretized by finite element techniques which lead to a discrete parametric problem which can be solved by well conditioned iteration involving only the solution of decoupled Neumann-type elliptic finite element systems and L^2 projection onto subspaces of spherical harmonics. Error and stability estimates given show exponential convergence in the degree of the spherical harmonics and optimal order convergence with respect to the finite element approximation for the resulting fields in L^2. 24 references.
Kew, William; Mitchell, John B O
2015-09-01
The application of Machine Learning to cheminformatics is a large and active field of research, but few papers discuss whether ensembles of different Machine Learning methods can improve upon the performance of their component methodologies. Here we investigated a variety of methods, including kernel-based, tree-based, linear, and neural network models, together with both greedy and linear ensemble methods. These were all tested against a standardised methodology for regression with data relevant to the pharmaceutical development process. This investigation focused on QSPR problems within drug-like chemical space. We aimed to investigate which methods perform best, and how the 'wisdom of crowds' principle can be applied to ensemble predictors. It was found that no single method performs best for all problems, but that a dynamic, well-structured ensemble predictor would perform very well across the board, usually providing an improvement in performance over the best single method. Its use of weighting factors allows the greedy ensemble to acquire a bigger contribution from the better performing models, which helps the greedy ensemble generally outperform the simpler linear ensemble. The choice of data preprocessing methodology was also found to be crucial to the performance of each method.
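Greedy ensemble construction of this kind can be sketched with forward selection over fixed validation predictions: repeatedly add (with replacement) whichever model most reduces the ensemble's validation error, and keep the best ensemble seen. The per-model prediction vectors and targets below are invented toy numbers, not QSPR results, and this is a generic greedy-selection sketch rather than the paper's exact weighting scheme.

```python
# Greedy forward ensemble selection over fixed validation predictions.

def mse(pred, y):
    return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

def average(models):
    return [sum(m[i] for m in models) / len(models)
            for i in range(len(models[0]))]

def greedy_ensemble(preds, y, rounds=20):
    chosen = []                               # models picked so far (with replacement)
    best_seen, best_err = None, float("inf")
    for _ in range(rounds):
        # add the model that most reduces the ensemble's validation error
        cand = min(preds, key=lambda m: mse(average(chosen + [m]), y))
        chosen.append(cand)
        err = mse(average(chosen), y)
        if err < best_err:
            best_seen, best_err = list(chosen), err
    return best_seen, best_err

y = [1.0, 2.0, 3.0, 4.0]                      # validation targets
model_a = [1.2, 2.1, 2.8, 4.3]                # predictions of three toy models
model_b = [0.7, 2.2, 3.2, 3.8]
model_c = [2.0, 2.0, 2.0, 2.0]
ensemble, err = greedy_ensemble([model_a, model_b, model_c], y)
```

Because the first round picks the single best model and the best ensemble seen is retained, the greedy ensemble can never do worse on validation than its best component; selection with replacement is what implicitly produces the weighting factors mentioned in the abstract.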
Complementarity relations for quantum coherence
NASA Astrophysics Data System (ADS)
Cheng, Shuming; Hall, Michael J. W.
2015-10-01
Various measures have been suggested recently for quantifying the coherence of a quantum state with respect to a given basis. We first use two of these, the l1-norm and relative entropy measures, to investigate tradeoffs between the coherences of mutually unbiased bases. Results include relations between coherence, uncertainty, and purity; tight general bounds restricting the coherences of mutually unbiased bases; and an exact complementarity relation for qubit coherences. We further define the average coherence of a quantum state. For the l1-norm measure this is related to a natural "coherence radius" for the state and leads to a conjecture for an l2-norm measure of coherence. For relative entropy the average coherence is determined by the difference between the von Neumann entropy and the quantum subentropy of the state and leads to upper bounds for the latter quantity. Finally, we point out that the relative entropy of coherence is a special case of G-asymmetry, which immediately yields several operational interpretations in contexts as diverse as frame alignment, quantum communication, and metrology, and suggests generalizing the property of quantum coherence to arbitrary groups of physical transformations.
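A small numerical check in the spirit of these complementarity relations: for the l1-norm measure, the squared coherences of a pure qubit state with respect to the three mutually unbiased qubit bases (the Z, X, and Y eigenbases) sum to exactly 2. This identity is derived independently here from the Bloch-vector picture and is offered as an illustration, not necessarily the paper's exact relation.

```python
# Check: sum over the three qubit MUBs of C_l1(rho, basis)^2 == 2 for pure states.
import cmath, math

def density_matrix(theta, phi):
    # pure qubit state |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>
    psi = [math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2)]
    return [[psi[i] * psi[j].conjugate() for j in range(2)] for i in range(2)]

def l1_coherence(rho, u, v):
    # for a qubit, C_l1 in the basis {u, v} equals 2 * |<u|rho|v>|
    elem = sum(u[i].conjugate() * rho[i][j] * v[j]
               for i in range(2) for j in range(2))
    return 2 * abs(elem)

s = 1 / math.sqrt(2)
z_basis = ([1, 0], [0, 1])
x_basis = ([s, s], [s, -s])
y_basis = ([s, 1j * s], [s, -1j * s])

rho = density_matrix(1.1, 0.7)      # an arbitrary pure state
total = sum(l1_coherence(rho, u, v) ** 2
            for u, v in (z_basis, x_basis, y_basis))
# total equals 2 for every pure qubit state; mixed states give 2*|r|^2 < 2
```

In Bloch-vector terms the three coherences are sqrt(rx²+ry²), sqrt(ry²+rz²), and sqrt(rx²+rz²), so their squares sum to 2|r|², which is 2 exactly when the state is pure.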
The Kantian framework of complementarity
NASA Astrophysics Data System (ADS)
Cuffaro, Michael
A growing number of commentators have, in recent years, noted the important affinities in the views of Immanuel Kant and Niels Bohr. While these commentators are correct, the picture they present of the connections between Bohr and Kant is painted in broad strokes; it is open to the criticism that these affinities are merely superficial. In this essay, I provide a closer, structural analysis of both Bohr's and Kant's views that makes these connections more explicit. In particular, I demonstrate the similarities between Bohr's argument, on the one hand, that neither the wave nor the particle description of atomic phenomena picks out an object in the ordinary sense of the word, and Kant's requirement, on the other hand, that both 'mathematical' (having to do with magnitude) and 'dynamical' (having to do with an object's interaction with other objects) principles must be applicable to appearances in order for us to determine them as objects of experience. I argue that Bohr's 'complementarity interpretation' of quantum mechanics, which views atomic objects as idealizations, and which licenses the repeal of the principle of causality for the domain of atomic physics, is perfectly compatible with, and indeed follows naturally from, a broadly Kantian epistemological framework.
Acceleration of multiple solution of a boundary value problem involving a linear algebraic system
NASA Astrophysics Data System (ADS)
Gazizov, Talgat R.; Kuksenko, Sergey P.; Surovtsev, Roman S.
2016-06-01
Multiple solution of a boundary value problem that involves a linear algebraic system is considered. A new approach to accelerating the solution is proposed. The approach exploits the structure of the linear system matrix: the entries in the rightmost columns and bottom rows of the matrix, which vary as the computation ranges over the parameters, are handled by block LU decomposition while the remainder of the factorization is reused. Application of the approach is illustrated by the multiple computation of the capacitance matrix by the method of moments used in numerical electromagnetics. Expressions for analytic estimation of the acceleration are presented. Results of numerical experiments for the solution of 100 linear systems with matrix orders of 1000, 2000, and 3000, and with different ratios of varied to constant entries of the matrix, show that block LU decomposition can be effective for multiple solution of linear systems. The speed-up compared to pointwise LU factorization increases (up to 15 times) with larger numbers and orders of the systems considered and lower numbers of varied entries.
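The reuse idea can be sketched on a tiny dense system. Writing the matrix as [[A, B], [C, D]] with A the fixed leading block and B, C, D the varying trailing rows/columns, one factors A once and then solves each variant through the Schur complement. The sizes and values below are illustrative (the paper's A blocks have orders in the thousands), and the factorization here omits pivoting for brevity.

```python
# Reuse the LU factors of the fixed leading block A across many solves of
# [[A, B], [C, D]] [x; y] = [f; g], where only B, C, D, f, g change.

def lu_factor(A):
    n = len(A)
    LU = [row[:] for row in A]        # in-place Doolittle factorization L\U
    for k in range(n):
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU

def lu_solve(LU, b):
    n = len(b)
    y = b[:]
    for i in range(n):                # forward substitution (unit lower part)
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    for i in reversed(range(n)):      # back substitution
        for j in range(i + 1, n):
            y[i] -= LU[i][j] * y[j]
        y[i] /= LU[i][i]
    return y

A = [[4.0, 1.0], [1.0, 3.0]]          # fixed block: factor once, reuse below
LU = lu_factor(A)

def block_solve(B, C, D, f, g):
    z = lu_solve(LU, [row[0] for row in B])                  # A^{-1} B (B is 2x1 here)
    w = lu_solve(LU, f)                                      # A^{-1} f
    S = D[0][0] - sum(ci * zi for ci, zi in zip(C[0], z))    # Schur complement
    y = (g[0] - sum(ci * wi for ci, wi in zip(C[0], w))) / S
    x = [wi - zi * y for wi, zi in zip(w, z)]
    return x + [y]

# one varied trailing block; the right-hand side is built so the solution is [1, 2, 3]
B, C, D = [[1.0], [2.0]], [[2.0, 1.0]], [[5.0]]
f, g = [9.0, 13.0], [19.0]
sol = block_solve(B, C, D, f, g)
```

Each additional variant costs only triangular solves with the cached factors plus a small Schur-complement solve, which is where the reported speed-up over refactorizing the full matrix comes from.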
On Linear Instability and Stability of the Rayleigh-Taylor Problem in Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Jiang, Fei; Jiang, Song
2015-12-01
We investigate the stabilizing effects of the magnetic fields in the linearized magnetic Rayleigh-Taylor (RT) problem of a nonhomogeneous incompressible viscous magnetohydrodynamic fluid of zero resistivity in the presence of a uniform gravitational field in a three-dimensional bounded domain, in which the velocity of the fluid is non-slip on the boundary. By adapting a modified variational method and carefully deriving a priori estimates, we establish a criterion for the instability/stability of the linearized problem around a magnetic RT equilibrium state. In the criterion, we find a new phenomenon: a sufficiently strong horizontal magnetic field has the same stabilizing effect as a vertical magnetic field on the growth of the magnetic RT instability. In addition, we further study the corresponding compressible case, i.e., the Parker (or magnetic buoyancy) problem, for which the strength of a horizontal magnetic field decreases with height, and also show the stabilizing effect of a sufficiently large magnetic field.
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M; Derocher, Andrew E; Lewis, Mark A; Jonsen, Ian D; Mills Flemming, Joanna
2016-01-01
State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of a SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results. PMID:27220686
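The simplest member of the model class studied here, a one-dimensional linear Gaussian SSM (a random walk observed with noise) together with its Kalman filter, can be written in a few lines. The noise variances and series length are illustrative; the paper's point is that when the measurement variance r dominates the process variance q, estimating q and r jointly from the observations becomes difficult.

```python
# 1-D linear Gaussian state-space model and Kalman filter.
# State:       z_t = z_{t-1} + N(0, q)   (biological stochasticity)
# Observation: y_t = z_t + N(0, r)       (measurement error)
import random

def simulate(T, q, r, seed=0):
    rng = random.Random(seed)
    z, states, obs = 0.0, [], []
    for _ in range(T):
        z += rng.gauss(0.0, q ** 0.5)
        states.append(z)
        obs.append(z + rng.gauss(0.0, r ** 0.5))
    return states, obs

def kalman_filter(obs, q, r):
    m, P = 0.0, 1.0                        # prior mean and variance
    means = []
    for y in obs:
        P_pred = P + q                     # predict
        K = P_pred / (P_pred + r)          # Kalman gain
        m = m + K * (y - m)                # update
        P = (1.0 - K) * P_pred
        means.append(m)
    return means, P

states, obs = simulate(200, q=1.0, r=1.0)
means, P = kalman_filter(obs, q=1.0, r=1.0)
# the filter variance P converges to the steady-state Riccati value,
# (sqrt(5) - 1) / 2 for q = r = 1, independently of the data
```

The state-estimation step above assumes q and r are known; the estimation problems the authors describe arise one level up, when these variances must themselves be inferred from y.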
Variable-permittivity linear inverse problem for the H(sub z)-polarized case
NASA Technical Reports Server (NTRS)
Moghaddam, M.; Chew, W. C.
1993-01-01
The H(sub z)-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H(sub z)-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.
A Vector Study of Linearized Supersonic Flow Applications to Nonplanar Problems
NASA Technical Reports Server (NTRS)
Martin, John C
1953-01-01
A vector study of the partial-differential equation of steady linearized supersonic flow is presented. General expressions which relate the velocity potential in the stream to the conditions on the disturbing surfaces are derived. In connection with these general expressions, the concept of the finite part of an integral is discussed. A discussion of problems dealing with planar bodies is given, and the conditions for the solution to be unique are investigated. Problems concerning nonplanar systems are investigated, and methods are derived for the solution of some simple nonplanar bodies. The surface pressure distribution and the damping in roll are found for rolling tails consisting of four, six, and eight rectangular fins for the Mach number range where the region of interference between adjacent fins does not affect the fin tips.
LINPRO: Linear inverse problem library for data contaminated by statistical noise
NASA Astrophysics Data System (ADS)
Magierski, Piotr; Wlazłowski, Gabriel
2012-10-01
The library LINPRO, which provides the solution to the linear inverse problem for data contaminated by statistical noise, is presented. The library makes use of two methods: the Maximum Entropy Method and Singular Value Decomposition. As an example, it has been applied to perform an analytic continuation of the imaginary-time propagator obtained within the Quantum Monte Carlo method.
Program summary
Program title: LINPRO v1.0
Catalogue identifier: AEMT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Lesser General Public Licence
No. of lines in distributed program, including test data, etc.: 110620
No. of bytes in distributed program, including test data, etc.: 3208593
Distribution format: tar.gz
Programming language: C++
Computer: the LINPRO library should compile on any computing system that has a C++ compiler
Operating system: Linux or Unix
Classification: 4.9, 4.12, 4.13
External routines: OPT++: An Object-Oriented Nonlinear Optimization Library [1] (included in the distribution)
Nature of problem: LINPRO solves linear inverse problems with an arbitrary kernel and arbitrary external constraints imposed on the solution
Solution method: LINPRO implements two complementary methods: the Maximum Entropy Method and the SVD method
Additional comments: tested with the GNU (g++) and Intel (icpc) compilers
Running time: problem dependent, ranging from seconds to hours; each of the examples takes less than a minute to run
References: [1] OPT++: An Object-Oriented Nonlinear Optimization Library, https://software.sandia.gov/opt++/.
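Why a noisy linear inverse problem needs a stabilizer at all can be seen with a regularizer much simpler than LINPRO's Maximum Entropy or SVD machinery: Tikhonov-regularized least squares on a nearly singular 2x2 kernel. The kernel, noise, and regularization parameter below are invented for the demonstration.

```python
# Naive vs. Tikhonov-regularized solution of an ill-conditioned linear
# inverse problem with slightly noisy data.

def solve2(M, b):
    # direct solve of a 2x2 system by Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def tikhonov(A, b, lam):
    # solve the regularized normal equations (A^T A + lam * I) x = A^T b
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    AtA[0][0] += lam
    AtA[1][1] += lam
    return solve2(AtA, Atb)

A = [[1.0, 1.0], [1.0, 1.0001]]       # nearly singular kernel
x_true = [1.0, 1.0]
noise = [1e-3, -1e-3]                 # small "statistical" noise in the data
b = [sum(a * x for a, x in zip(row, x_true)) + e for row, e in zip(A, noise)]

x_naive = solve2(A, b)                # noise is amplified: roughly [21, -19]
x_reg = tikhonov(A, b, 1e-2)          # stays near the true [1, 1]
```

Maximum Entropy and truncated SVD play the same stabilizing role as the lam term here, but with constraints (positivity, known sum rules) that are better suited to analytic continuation.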
On linear acoustic solutions of high speed helicopter impulsive noise problems
NASA Astrophysics Data System (ADS)
Tam, C. K. W.
1983-07-01
The nature of linear acoustic solutions for a helicopter rotor blade with a blunt leading edge operating at high transonic tip Mach number is studied. As a part of this investigation a very efficient computation procedure for helicopter rotor blade thickness noise according to linear theory is developed. Numerical and analytical results reveal that as the blade tip Mach number approaches unity, the solution develops singularities and a radiating discontinuity. It is shown that these characteristic features are caused by the contributions of the higher harmonics, which decrease in magnitude only as n^(-1/2) in the limit as n tends to infinity. These higher harmonics are generated by the blunt leading edge. The far field wave form at sonic tip Mach number for a blade with a NACA 0012 airfoil section has a singularity of the inverse root type at its front and a logarithmic singularity near its end. Thus caution must be exercised in applying linear acoustic theory to high speed helicopter impulsive noise problems.
Linear stability of the Couette flow of a vibrationally excited gas. 2. Viscous problem
NASA Astrophysics Data System (ADS)
Grigor'ev, Yu. N.; Ershov, I. V.
2016-03-01
Based on linear theory, the stability of viscous disturbances in a supersonic plane Couette flow of a vibrationally excited gas, described by a system of linearized equations of two-temperature gas dynamics including shear and bulk viscosity, is studied. It is demonstrated that two sets are identified in the spectrum of the problem of stability of plane waves, similar to the case of a perfect gas. One set consists of viscous acoustic modes, which asymptotically converge to even and odd inviscid acoustic modes at high Reynolds numbers. The eigenvalues from the other set have no asymptotic relationship with the inviscid problem and are characterized by large damping decrements. The two most unstable viscous acoustic modes (I and II) are identified; the limits of these modes were considered previously in the inviscid approximation. It is shown that for both modes there are domains in the space of parameters where the presence of viscosity induces appreciable destabilization of the flow. Moreover, the growth rates of disturbances are appreciably greater than the corresponding values for the inviscid flow, while thermal excitation in the entire considered range of parameters increases the stability of the viscous flow. For a vibrationally excited gas, the critical Reynolds number as a function of the degree of thermal nonequilibrium is found to be 12% greater than for a perfect gas.
ERIC Educational Resources Information Center
Nistal, Ana Acevedo; Van Dooren, Wim; Verschaffel, Lieven
2012-01-01
This study evaluated students' representational choices while they solved linear function problems. Eighty-six secondary-school students solved problems under one choice condition, where they chose a table, a formula, or both to solve each problem, and two no-choice conditions, where one of these representations was forced upon them. Two…
NASA Astrophysics Data System (ADS)
Zhou, Qinglong; Long, Yiming
2016-10-01
In this paper, we consider the elliptic collinear solutions of the classical n-body problem, where the n bodies always stay on a straight line and each of them moves on its own elliptic orbit with the same eccentricity. Such a motion is called an elliptic Euler-Moulton collinear solution. Here we prove that the corresponding linearized Hamiltonian system at such an elliptic Euler-Moulton collinear solution of n bodies splits into (n-1) independent linear Hamiltonian systems: the first is the linearized Hamiltonian system of the Kepler 2-body problem at the Kepler elliptic orbit, and each of the other (n-2) systems is the essential part of the linearized Hamiltonian system at an elliptic Euler collinear solution of a 3-body problem with a modified mass parameter. The linear stability of such a solution of the n-body problem is thus reduced to that of the corresponding elliptic Euler collinear solutions of 3-body problems, which can then be further understood, for example, via the numerical results of Martínez et al. on 3-body Euler solutions from 2004-2006. As an example, we carry out the detailed derivation of the linear stability for an elliptic Euler-Moulton solution of the 4-body problem with two small masses in the middle.
An improved exploratory search technique for pure integer linear programming problems
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1990-01-01
The development of a heuristic method for the solution of pure integer linear programming problems is documented. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since the method is based entirely on simple addition or subtraction of one to each variable of a point in n-space, followed by comparison of candidate solutions against a given set of constraints, it offers significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate-size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighting scheme for comparing the computational effort involved in an algorithm, this algorithm is compared to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC-compatible Pascal, is also presented and discussed.
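The round-then-search idea can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: flooring stands in for the paper's rounding method, and only packing-type constraints with nonnegative coefficients are handled so that the floored LP optimum is guaranteed feasible:

```python
import numpy as np
from scipy.optimize import linprog

def heuristic_ilp_max(values, A, b, upper):
    """Round-then-search heuristic for max values.x s.t. A x <= b,
    0 <= x <= upper, x integer, assuming A >= 0 (packing constraints)
    so that flooring the continuous optimum stays feasible."""
    values = np.asarray(values)
    n = len(values)
    bounds = [(0, u) for u in upper]
    # 1. Continuous relaxation (linprog minimizes, so negate the objective).
    res = linprog(-values.astype(float), A_ub=A, b_ub=b,
                  bounds=bounds, method="highs")
    # 2. First feasible integer point by flooring the continuous optimum.
    x = np.floor(res.x + 1e-9).astype(int)

    def feasible(y):
        return bool(np.all(A @ y <= b) and np.all(y >= 0) and np.all(y <= upper))

    # 3. +/-1 exploratory search: accept any unit move that stays
    #    feasible and improves the objective, until none helps.
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for step in (1, -1):
                y = x.copy()
                y[i] += step
                if feasible(y) and values @ y > values @ x:
                    x, improved = y, True
    return x

# Small example: max 10a + 6b + 4c, a+b+c <= 10, 5a+4b+3c <= 28.
values = np.array([10, 6, 4])
A = np.array([[1, 1, 1], [5, 4, 3]])
b = np.array([10, 28])
x = heuristic_ilp_max(values, A, b, upper=[10, 10, 10])
```

Here the continuous optimum is (5.6, 0, 0); flooring gives the feasible point (5, 0, 0) with value 50, and the ±1 search then improves it to the integer optimum [5, 0, 1] with value 54.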
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdős-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken. PMID:27301006
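The min-VC relaxation studied here can be reproduced on small graphs with any LP solver. The sketch below (function names are illustrative, not from the paper) contrasts an even cycle, where the relaxation matches the integer optimum, with a triangle, where the half-integral LP optimum 3/2 falls below the integer optimum 2:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def vc_lp(n, edges):
    """LP relaxation of minimum vertex cover:
    min sum x_v  s.t.  x_u + x_v >= 1 per edge, 0 <= x_v <= 1."""
    A = np.zeros((len(edges), n))
    for k, (u, v) in enumerate(edges):
        A[k, u] = A[k, v] = -1.0        # -(x_u + x_v) <= -1
    res = linprog(np.ones(n), A_ub=A, b_ub=-np.ones(len(edges)),
                  bounds=[(0, 1)] * n, method="highs")
    return res.fun

def vc_ip(n, edges):
    """Exact integer optimum by brute force (small n only)."""
    best = n
    for bits in itertools.product((0, 1), repeat=n):
        if all(bits[u] + bits[v] >= 1 for u, v in edges):
            best = min(best, sum(bits))
    return best

# 4-cycle: LP and IP agree (both 2).  Triangle: LP gives 3/2 (all x_v = 1/2),
# while any integer cover needs 2 vertices.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
tri = [(0, 1), (1, 2), (2, 0)]
lp_c4, ip_c4 = vc_lp(4, c4), vc_ip(4, c4)
lp_tri, ip_tri = vc_lp(3, tri), vc_ip(3, tri)
```

The triangle is the smallest instance where the relaxation is loose, which is the α=2 analogue of the accuracy breakdown the paper locates at the critical average degree.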
NASA Astrophysics Data System (ADS)
Dotti, Gustavo; Gleiser, Reinaldo J.
2009-11-01
The coupled equations for the scalar modes of the linearized Einstein equations around Schwarzschild's spacetime were reduced by Zerilli to a (1+1) wave equation ∂²Ψ_z/∂t² + HΨ_z = 0, where H = -∂²/∂x² + V(x) is the Zerilli 'Hamiltonian' and x is the tortoise radial coordinate. From its definition, for smooth metric perturbations the field Ψ_z is singular at r_s = -6M/[(ℓ-1)(ℓ+2)], with ℓ being the mode harmonic number. The equation Ψ_z obeys is also singular, since V has a second-order pole at r_s. This is irrelevant to the black hole exterior stability problem, where r > 2M > 0 and r_s < 0, but it introduces a non-trivial problem in the naked singular case where M < 0: then r_s > 0, and the singularity appears in the relevant range of r (0 < r < ∞). We solve this problem by developing a new approach to the evolution of the even mode, based on a new gauge-invariant function, Ψ̂, that is a regular function of the metric perturbation for any value of M. The relation of Ψ̂ to Ψ_z is provided by an intertwiner operator. The spatial pieces of the (1+1) wave equations that Ψ̂ and Ψ_z obey are related as a supersymmetric pair of quantum Hamiltonians H and Ĥ. For M < 0, Ĥ has a regular potential and a unique self-adjoint extension in a domain D defined by a physically motivated boundary condition at r = 0. This allows us to address the issue of evolution of gravitational perturbations in this non-globally hyperbolic background. This formulation is used to complete the proof of the linear instability of the Schwarzschild naked singularity, by showing that a previously found unstable mode belongs to a complete basis of Ĥ in D, and thus is excitable by generic initial data. This is further illustrated by numerically solving the linearized equations for suitably chosen initial data.
NASA Astrophysics Data System (ADS)
Shaldanbayev, Amir; Shomanbayeva, Manat; Kopzhassarova, Asylzat
2016-08-01
This paper proposes a fundamentally new method of investigation of a singularly perturbed Cauchy problem for a linear system of ordinary differential equations based on the spectral theory of equations with deviating argument.
Madrigal-González, Jaime; Ruiz-Benito, Paloma; Ratcliffe, Sophia; Calatayud, Joaquín; Kändler, Gerald; Lehtonen, Aleksi; Dahlgren, Jonas; Wirth, Christian; Zavala, Miguel A.
2016-01-01
Neglecting tree size and stand structure dynamics might bias the interpretation of the diversity-productivity relationship in forests. Here we show evidence that complementarity is contingent on tree size across large-scale climatic gradients in Europe. We compiled growth data of the 14 most dominant tree species in 32,628 permanent plots covering boreal, temperate and Mediterranean forest biomes. Niche complementarity is expected to result in significant growth increments of trees surrounded by a larger proportion of functionally dissimilar neighbours. Functional dissimilarity at the tree level was assessed using four functional types: broad-leaved deciduous, broad-leaved evergreen, needle-leaved deciduous and needle-leaved evergreen. Using Linear Mixed Models, we show that complementarity effects depend on tree size along an energy availability gradient across Europe. Specifically: (i) complementarity effects at low and intermediate positions of the gradient (coldest-temperate areas) were stronger for small than for large trees; (ii) in contrast, at the upper end of the gradient (warmer regions), complementarity is more widespread in larger than smaller trees, which in turn showed negative growth responses to increased functional dissimilarity. Our findings suggest that the outcome of species mixing on stand productivity might critically depend on individual size distribution structure along gradients of environmental variation. PMID:27571971
A review of vector convergence acceleration methods, with applications to linear algebra problems
NASA Astrophysics Data System (ADS)
Brezinski, C.; Redivo-Zaglia, M.
In this article, in a few pages, we will try to give an idea of convergence acceleration methods and extrapolation procedures for vector sequences, and to present some applications to linear algebra problems and to the treatment of the Gibbs phenomenon for Fourier series, in order to show their effectiveness. The interested reader is referred to the literature for more details. In the bibliography, due to space limitations, we give only the more recent items; for older ones, we refer to Brezinski and Redivo-Zaglia (Extrapolation Methods: Theory and Practice, North-Holland, 1991). That book also contains, on a magnetic support, a library (in Fortran 77) of convergence acceleration algorithms and extrapolation methods.
Solving the Linear Balance Equation on the Globe as a Generalized Inverse Problem
NASA Technical Reports Server (NTRS)
Lu, Huei-Iin; Robertson, Franklin R.
1999-01-01
A generalized (pseudo) inverse technique was developed to facilitate a better understanding of the numerical effects of tropical singularities inherent in the spectral linear balance equation (LBE). Depending upon the truncation, various levels of determinacy are manifest. The traditional fully-determined (FD) systems give rise to a strong response to the tropical singularities, while the under-determined (UD) systems yield a weak response. The over-determined (OD) systems result in a modest response and a large residual in the tropics. The FD and OD systems can alternatively be solved by the iterative method. Differences in the solutions of a UD system exist between the inverse technique and the iterative method owing to the non-uniqueness of the problem. A realistic balanced wind was obtained by solving the principal components of the spectral LBE in terms of vorticity at an intermediate resolution. Improved solutions were achieved by including the singular-component solutions which best fit the observed wind data.
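The dependence of the solution on the level of determinacy is easy to see with a generalized (pseudo) inverse on toy systems. This is a generic numpy illustration of the OD/UD distinction, not the spectral LBE itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Over-determined (OD): more equations than unknowns.
# The pseudo-inverse returns the least-squares solution; a residual remains.
A_od = rng.standard_normal((8, 5))
b_od = rng.standard_normal(8)
x_od = np.linalg.pinv(A_od) @ b_od

# Under-determined (UD): fewer equations than unknowns.
# The pseudo-inverse returns the minimum-norm exact solution (a "weak response":
# nothing is added in the directions the equations do not constrain).
A_ud = rng.standard_normal((5, 8))
b_ud = rng.standard_normal(5)
x_ud = np.linalg.pinv(A_ud) @ b_ud

resid_od = np.linalg.norm(A_od @ x_od - b_od)   # generally nonzero
resid_ud = np.linalg.norm(A_ud @ x_ud - b_ud)   # essentially zero
```

The non-uniqueness noted in the abstract lives entirely in the UD case: any null-space vector can be added to `x_ud` and still satisfy the equations, which is why the inverse technique and an iterative method can converge to different UD solutions.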
First-order system least squares for the pure traction problem in planar linear elasticity
Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.
1996-12-31
This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L{sup 2} norms to define the FOSLS functional, is shown under certain H{sup 2} regularity assumptions to admit optimal H{sup 1}-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H{sup -1} norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L{sup 2} norm and for displacement in an H{sup 1} norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.
Self-complementarity of messenger RNA's of periodic proteins
NASA Technical Reports Server (NTRS)
Ycas, M.
1973-01-01
It is shown that the mRNAs of three periodic proteins (collagen, keratin, and freezing-point-depressing glycoproteins) show a marked degree of self-complementarity. The possible origin of this self-complementarity is discussed.
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.
1991-01-01
The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight-path control problems and attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions, two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and the controller is tested on a six-degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validate this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely match the simulation results. Flight-test data are also presented.
Interval analysis approach to rank determination in linear least squares problems
Manteuffel, T.A.
1980-06-01
The linear least-squares problem Ax ≈ b has a unique solution only if the matrix A has full column rank. Numerical rank determination is difficult, especially in the presence of uncertainties in the elements of A. This paper proposes an interval analysis approach. A set of matrices A^I is defined that contains all possible perturbations of A due to uncertainties; A^I is said to be rank deficient if any member of A^I is rank deficient. A modification to the Q-R decomposition method of solution of the least-squares problem allows a determination of the rank of A^I and a partial interval analysis of the solution vector x. This procedure requires the computation of R^(-1). Another modification is proposed which determines the rank of A^I without computing R^(-1). The additional computational effort is O(N^2), where N is the column dimension of A. 4 figures.
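A crude sufficient test in the spirit of this interval approach can be written directly from singular values. This is a sketch of the rank question only; the paper's method works inside the Q-R decomposition and also bounds the solution vector, which this sketch does not attempt:

```python
import numpy as np

def interval_full_rank(A, dA):
    """Sufficient test that every member of the interval family
    A^I = {A + E : |E_ij| <= dA_ij} has full column rank.
    Since |E_ij| <= dA_ij implies ||E||_2 <= ||dA||_2 (dA nonnegative),
    Weyl's inequality gives sigma_min(A + E) >= sigma_min(A) - ||dA||_2,
    so sigma_min(A) > ||dA||_2 certifies full rank for the whole family."""
    sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
    return bool(sigma_min > np.linalg.svd(dA, compute_uv=False)[0])

# A is nearly rank-deficient: sigma_min(A) = 1e-3.
A = np.array([[1.0, 0.0], [0.0, 1e-3], [0.0, 0.0]])
tight = 1e-5 * np.ones((3, 2))   # small uncertainties: full rank certified
loose = 1e-2 * np.ones((3, 2))   # uncertainties exceed sigma_min: not certified
ok_tight = interval_full_rank(A, tight)
ok_loose = interval_full_rank(A, loose)
```

When the test fails, the family may or may not contain a rank-deficient member; resolving that ambiguity is exactly what the paper's finer Q-R-based analysis is for.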
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can outperform the feasible-directions algorithm on design optimization problems with a large number of design variables and constraints. The second is to determine whether replacing the constraints with a single constraint built from the Kreisselmeier-Steinhauser (KS) function reduces the total cost of optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace the constraints.
A stabilized complementarity formulation for nonlinear analysis of 3D bimodular materials
NASA Astrophysics Data System (ADS)
Zhang, L.; Zhang, H. W.; Wu, J.; Yan, B.
2016-06-01
Bimodular materials, with different mechanical responses in tension and compression, are often found in civil, composite, and biological engineering. The numerical analysis of bimodular materials is strongly nonlinear, and convergence is usually a problem for traditional iterative schemes. This paper aims to develop a stabilized computational method for nonlinear analysis of 3D bimodular materials. Based on the parametric variational principle, a unified constitutive equation for 3D bimodular materials is proposed, which allows the eight principal stress states to be indicated by three parametric variables introduced in the principal stress directions. The original problem is transformed into a standard linear complementarity problem (LCP) by the parametric virtual work principle, and a quadratic programming algorithm is developed by solving the LCP with the classic Lemke algorithm. Updating of the elasticity and stiffness matrices is avoided and, thus, the proposed algorithm shows excellent convergence behavior compared with traditional iterative schemes. Numerical examples show that the proposed method is valid and can accurately analyze the mechanical responses of 3D bimodular materials. The stability of the algorithm is also greatly improved.
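A linear complementarity problem of the form w = Mz + q, w ≥ 0, z ≥ 0, z·w = 0 can be solved, for symmetric positive definite M, by a simple projected Gauss-Seidel iteration. The sketch below is a stand-in for illustration of the LCP structure, not the Lemke pivoting scheme the paper actually uses:

```python
import numpy as np

def lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and z.w = 0. Converges for symmetric
    positive definite M; a simple stand-in for Lemke pivoting."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # Residual of row i with z[i] removed, then clamp at zero.
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-2.0, 1.0])
z = lcp_pgs(M, q)
w = M @ z + q
```

For this instance the solution is z = (0.5, 0) with w = (0, 1.5): the first constraint is "in contact" (z > 0, w = 0) and the second is slack, which is exactly the switching behavior the three parametric variables encode for the eight principal stress states.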
NASA Astrophysics Data System (ADS)
Zhou, Qinglong; Long, Yiming
2015-06-01
In this paper, we prove that the linearized system of the elliptic triangle homographic solution of the planar charged three-body problem can be transformed to that of the elliptic equilateral triangle solution of the planar classical three-body problem. Consequently, the results of Martínez, Samà and Simó (2006) [15] and of Hu, Long and Sun (2014) [6] can be applied to these solutions of the charged three-body problem to obtain their linear stability.
Abgrall, Rémi; Congedo, Pietro Marco
2013-02-15
This paper deals with the formulation of a semi-intrusive (SI) method allowing the computation of statistics of solutions of linear and non-linear PDEs. The method proves to be very efficient in dealing with probability density functions of arbitrary form, long-term integration, and discontinuities in stochastic space. Given a stochastic PDE where randomness is defined on Ω, starting from (i) a description of the solution in terms of the space variables, (ii) a numerical scheme defined for any event ω∈Ω and (iii) a (family of) random variables that may be correlated, the solution is numerically described by its conditional expectancies of point values or cell averages, and its evaluation is constructed from the deterministic scheme. One of the tools is a tessellation of the random space, as in finite volume methods for the space variables. Then, using these conditional expectancies and the geometrical description of the tessellation, a piecewise polynomial approximation in the random variables is computed using a reconstruction method that is standard for high-order finite volume schemes, except that the measure is no longer the standard Lebesgue measure but the probability measure. This reconstruction is then used to formulate a scheme for the numerical approximation of the solution from the deterministic scheme. This new approach is termed semi-intrusive because it requires only a limited amount of modification in a deterministic solver to quantify uncertainty on the state when the solver includes uncertain variables. The effectiveness of this method is illustrated for a modified version of the Kraichnan-Orszag three-mode problem, where a discontinuous pdf is associated with the stochastic variable, and for a nozzle flow with shocks. The results have been analyzed in terms of accuracy and probability measure flexibility. Finally, the importance of the probabilistic reconstruction in the stochastic space is shown on an example where the exact solution is computable, the viscous
Aksenov, V. L.; Kiselev, M. A.
2010-12-15
General problems of the complementarity of different physical methods, as well as specific features of the interaction between neutrons and matter and of time-of-flight neutron diffraction, are discussed. The results of studying the kinetics of structural changes in lipid membranes under hydration and the self-assembly of the lipid bilayer in the presence of a detergent are reported. The possibilities of the complementarity of neutron diffraction and X-ray synchrotron radiation and of developing a free-electron laser are noted.
Crane, R L; Garbow, B S; Hillstrom, K E; Minkoff, M
1980-11-01
This report describes the implementation of an algorithm of Stoer and Schittkowski for solving linearly constrained linear least-squares problems. These problems arise in many areas, particularly in data fitting where a model is provided and parameters in the model are selected to be a best least-squares fit to known experimental observations. By adding constraints to the least-squares fit, one can force user-specified properties on the parameters selected. The algorithm used applies a numerically stable implementation of the Gram-Schmidt orthogonalization procedure to deal with a factorization approach for solving the constrained least-squares problem. The software developed allows for either a user-supplied feasible starting point or the automatic generation of a feasible starting point, redecomposition after solving the problem to improve numerical accuracy, and diagnostic printout to follow the computations in the algorithm. In addition to a description of the actual method used to solve the problem, a description of the software structure and the user interfaces is provided, along with a numerical example. 3 figures, 1 table.
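The constrained least-squares task the report addresses, min ‖Ax - b‖ subject to equality constraints Cx = d, can be sketched with a null-space method. This is a generic illustration: numpy's SVD and lstsq routines stand in for the report's stable Gram-Schmidt factorization, and all names below are hypothetical:

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Equality-constrained least squares: min ||A x - b||_2 s.t. C x = d.
    Null-space method: split x into a particular solution of the constraints
    plus a free component in the null space of C."""
    x_p = np.linalg.lstsq(C, d, rcond=None)[0]   # particular solution of C x = d
    _, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > 1e-12))
    N = Vt[rank:].T                              # columns span null(C)
    # Reduced unconstrained problem for the null-space coefficients.
    y = np.linalg.lstsq(A @ N, b - A @ x_p, rcond=None)[0]
    return x_p + N @ y

# Data fitting with a forced property: fit y = p0 + p1 t to observations,
# constrained so the line passes exactly through (t, y) = (0, 1).
t = np.array([0.0, 1.0, 2.0, 3.0])
obs = np.array([1.2, 1.9, 3.1, 3.9])
A = np.column_stack([np.ones_like(t), t])
C = np.array([[1.0, 0.0]])                       # constraint: p0 = 1
d = np.array([1.0])
p = constrained_lstsq(A, b=obs, C=C, d=d)
```

The constraint is satisfied exactly (p0 = 1), while the remaining freedom (the slope) is chosen by least squares, here p1 = 13.8/14, which mirrors the report's use of constraints to force user-specified properties on the fitted parameters.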
Sensor Placement by Maximal Projection on Minimum Eigenspace for Linear Inverse Problems
NASA Astrophysics Data System (ADS)
Jiang, Chaoyang; Soh, Yeng Chai; Li, Hua
2016-11-01
This paper presents two new greedy sensor placement algorithms, named minimum nonzero eigenvalue pursuit (MNEP) and maximal projection on minimum eigenspace (MPME), for linear inverse problems, with greater emphasis on the MPME algorithm for performance comparison with existing approaches. We select the sensing locations one-by-one. In this way, the least number of required sensors can be determined by checking whether the estimation accuracy is satisfied after each sensing location is determined. The minimum eigenspace is defined as the eigenspace associated with the minimum eigenvalue of the dual observation matrix. For each sensing location, the projection of its observation vector onto the minimum eigenspace is shown to be monotonically decreasing w.r.t. the worst case error variance (WCEV) of the estimated parameters. We select the sensing location whose observation vector has the maximum projection onto the minimum eigenspace of the current dual observation matrix. The proposed MPME is shown to be one of the most computationally efficient algorithms. Our Monte-Carlo simulations showed that MPME outperforms the convex relaxation method [1], the SparSenSe method [2], and the FrameSense method [3] in terms of WCEV and the mean square error (MSE) of the estimated parameters, especially when the number of available sensor nodes is very limited.
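The greedy MPME selection rule can be sketched as follows. This is an illustrative reading of the abstract, not the authors' code: the matrix `Phi`, the tie-breaking, and the tolerance used to identify the minimum eigenspace are all assumptions:

```python
import numpy as np

def mpme_select(Phi, k):
    """Greedy sensor placement: at each step pick the candidate whose
    observation vector has maximal projection onto the minimum eigenspace
    (eigenspace of the smallest eigenvalue) of the current dual
    observation matrix A = sum over chosen i of phi_i phi_i^T."""
    n, m = Phi.shape
    chosen = []
    A = np.zeros((m, m))
    for _ in range(k):
        evals, evecs = np.linalg.eigh(A)           # ascending eigenvalues
        U = evecs[:, np.isclose(evals, evals[0])]  # minimum-eigenspace basis
        best, best_score = None, -1.0
        for i in range(n):
            if i in chosen:
                continue
            score = np.linalg.norm(U.T @ Phi[i])   # projection onto min eigenspace
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
        A += np.outer(Phi[best], Phi[best])
    return chosen

# Rows 0 and 1 are nearly parallel; a good 2-sensor design pairs one of
# them with row 2, which covers the poorly observed direction.
Phi = np.array([[1.0, 0.0],
                [1.0, 0.01],
                [0.0, 1.0],
                [0.9, 0.1]])
chosen = mpme_select(Phi, k=2)
```

After the first pick, the minimum eigenspace is the direction the selected sensor observes worst, so the second pick is steered toward row 2 rather than the redundant near-parallel rows, which is the intuition behind the monotone link to the worst case error variance.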
Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems
NASA Astrophysics Data System (ADS)
Puzyrev, Vladimir; Koric, Seid; Wilkin, Scott
2016-04-01
High performance computing is absolutely necessary for large-scale geophysical simulations. In order to obtain a realistic image of a geologically complex area, industrial surveys collect vast amounts of data, making the computational cost of the subsequent simulations extremely high. A major computational bottleneck of modeling and inversion algorithms is solving the large, ill-conditioned sparse systems of linear equations that arise in complex domains with multiple right-hand sides. Recently, parallel direct solvers have been successfully applied to multi-source seismic and electromagnetic problems. These methods are robust and exhibit good performance, but often require large amounts of memory and have limited scalability. In this paper, we evaluate modern direct solvers on large-scale modeling examples that previously were considered unachievable with these methods. Performance and scalability tests utilizing up to 65,536 cores on the Blue Waters supercomputer clearly illustrate the robustness, efficiency and competitiveness of direct solvers compared to iterative techniques. Wide use of direct methods utilizing modern parallel architectures will allow modeling tools to accurately support multi-source surveys and 3D data acquisition geometries, thus promoting a more efficient use of the electromagnetic methods in geophysics.
Taming the non-linearity problem in GPR full-waveform inversion for high contrast media
NASA Astrophysics Data System (ADS)
Meles, Giovanni; Greenhalgh, Stewart; van der Kruk, Jan; Green, Alan; Maurer, Hansruedi
2012-03-01
We present a new algorithm for the inversion of full-waveform ground-penetrating radar (GPR) data. It is designed to tame the non-linearity issue that afflicts inverse scattering problems, especially in high contrast media. We first investigate the limitations of current full-waveform time-domain inversion schemes for GPR data and then introduce a much-improved approach based on a combined frequency-time-domain analysis. We show by means of several synthetic tests and theoretical considerations that local minima trapping (common in full bandwidth time-domain inversion) can be avoided by starting the inversion with only the low frequency content of the data. Resolution associated with the high frequencies can then be achieved by progressively expanding to wider bandwidths as the iterations proceed. Although based on a frequency analysis of the data, the new method is entirely implemented by means of a time-domain forward solver, thus combining the benefits of both frequency-domain (low frequency inversion conveys stability and avoids convergence to a local minimum; whereas high frequency inversion conveys resolution) and time-domain methods (simplicity of interpretation and recognition of events; ready availability of FDTD simulation tools).
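The frequency-continuation strategy described above can be illustrated with a toy scalar problem (our construction, not the authors' GPR code): the misfit between monochromatic signals sin(w·t·m) has many local minima in the model parameter m when w is high, so the estimate is first obtained at low frequency and then refined over progressively narrower search windows at higher frequencies.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 400)
m_true = 2.0   # "true model" parameter (invented for this toy)

def misfit_argmin(w, m_grid):
    """Grid-search the L2 misfit between trial and observed signals."""
    d = np.sin(w * t * m_true)                   # observed data at frequency w
    errs = [np.sum((np.sin(w * t * m) - d) ** 2) for m in m_grid]
    return m_grid[int(np.argmin(errs))]

m_est = 0.5                                      # poor starting model
for w in (2.0, 8.0, 32.0):                       # low -> high frequency
    half_width = 8.0 / w                         # window shrinks as w grows
    grid = np.linspace(m_est - half_width, m_est + half_width, 201)
    m_est = misfit_argmin(w, grid)
```

Starting directly at w = 32 from m = 0.5 would land in a local minimum; sweeping the bandwidth upward recovers m near 2 with the resolution of the highest frequency, which is the essence of the combined frequency-time-domain scheme.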
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single-route formulation to include routing decisions. An efficient comparison framework is used to compare the multi-route and single-route formulations. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine whether any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single-route formulation only when a perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, cumulative arrival taxi time savings in the multi-route formulation can be as much as 3.6 hours greater than in the single-route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single-route formulation, but the average arrival taxi time is significantly decreased.
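The core modeling pattern — binary route-choice variables under a shared-resource constraint — can be shown on a toy instance. The numbers and the two-aircraft scenario are invented, and we use SciPy's generic MILP solver rather than the paper's formulation: each of two arrivals picks either a runway-crossing route or a perimeter route, and at most one arrival may use the crossing.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables (binary): x = [A_cross, A_perim, B_cross, B_perim]
# Costs are taxi times in minutes (invented for this toy instance).
c = np.array([5.0, 9.0, 6.0, 8.0])

# Each aircraft takes exactly one route.
one_route = LinearConstraint(np.array([[1, 1, 0, 0],
                                       [0, 0, 1, 1]]), lb=1, ub=1)
# At most one aircraft may use the runway-crossing route.
crossing_cap = LinearConstraint(np.array([[1, 0, 1, 0]]), lb=0, ub=1)

res = milp(c, constraints=[one_route, crossing_cap],
           integrality=np.ones(4), bounds=Bounds(0, 1))
# Optimum: A crosses (5 min), B takes the perimeter (8 min) -> 13 min total.
```

Adding time-window and separation constraints on top of this skeleton is what turns route assignment into the full taxi scheduling problem.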
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
Low energy description of quantum gravity and complementarity
NASA Astrophysics Data System (ADS)
Nomura, Yasunori; Varela, Jaime; Weinberg, Sean J.
2014-06-01
We consider a framework in which low energy dynamics of quantum gravity is described preserving locality, and yet taking into account the effects that are not captured by the naive global spacetime picture, e.g. those associated with black hole complementarity. Our framework employs a "special relativistic" description of gravity; specifically, gravity is treated as a force measured by the observer tied to the coordinate system associated with a freely falling local Lorentz frame. We identify, in simple cases, regions of spacetime in which low energy local descriptions are applicable as viewed from the freely falling frame; in particular, we identify a surface called the gravitational observer horizon on which the local proper acceleration measured in the observer's coordinates becomes the cutoff (string) scale. This allows for separating between the "low-energy" local physics and "trans-Planckian" intrinsically quantum gravitational (stringy) physics, and allows for developing physical pictures of the origins of various effects. We explore the structure of the Hilbert space in which the proposed scheme is realized in a simple manner, and classify its elements according to certain horizons they possess. We also discuss implications of our framework on the firewall problem. We conjecture that the complementarity picture may persist due to properties of trans-Planckian physics.
Skill complementarity enhances heterophily in collaboration networks
NASA Astrophysics Data System (ADS)
Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2016-01-01
Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity is positively correlated with production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems.
On the complementarity of electroencephalography and magnetoencephalography
NASA Astrophysics Data System (ADS)
Dassios, G.; Fokas, A. S.; Hadjiloizi, D.
2007-12-01
We show that for the spherical model of the brain, the part of the neuronal current that generates the electric potential (and therefore the electric field) lives in the orthogonal complement of the part of the current that generates the magnetic potential (and therefore the magnetic induction field). This means that for a continuously distributed neuronal current, information missing in the electroencephalographic data is precisely information that is available in the magnetoencephalographic data, and vice versa. In this way, the notion of complementarity between the imaging techniques of electroencephalography and magnetoencephalography is mathematically defined. Using this notion of complementarity and expanding the neuronal current in terms of vector spherical harmonics, which by definition provide the angular dependence of the current, we show that if the electric and the magnetic potentials in the exterior of the head are given, then we can determine certain moments of the functions which provide the radial dependence of the neuronal current. In addition to the above notion of complementarity, we also present a notion of unification of electroencephalography and magnetoencephalography by showing that they are governed respectively by the scalar and the vector invariants of a unified dyadic field describing electromagnetoencephalography.
Meshkov, E.E.; Mokhov, V.N.
1983-01-01
Stability problems and the development of small perturbations in gasdynamics are ordinarily investigated by using the solution of linearized equations. The applicability of the linear approximation is usually determined by the smallness of the perturbation. However, the linear approximation turns out to fail in a number of cases. The authors consider a plane problem in which a characteristic surface curved along a sinusoid moves over a substance at a constant velocity. In this case, the change in surface shape with time is determined by the Huygens principle. Also considered is the one-dimensional flow of an ideal gas with adiabatic index ν in which there is a small sinusoidal perturbation at the initial time. These examples are encountered locally in the majority of problems on flow stability in gasdynamics. It is shown that the shape of the reflected wave front deviates from the sinusoidal with time, and that the formation of singularities and the deviation from the sinusoidal shape slow down as the amplitude of the initial perturbation diminishes. The authors conclude that utilization of a linearized approximation to solve the gasdynamics equations is possible only up to a certain time. Consequently, application of asymptotic formulas obtained on the basis of a linear approximation for a finite magnitude of the perturbation requires an additional foundation.
Horizons of description: Black holes and complementarity
NASA Astrophysics Data System (ADS)
Bokulich, Peter Joshua Martin
Niels Bohr famously argued that a consistent understanding of quantum mechanics requires a new epistemic framework, which he named complementarity. This position asserts that even in the context of quantum theory, classical concepts must be used to understand and communicate measurement results. The apparent conflict between certain classical descriptions is avoided by recognizing that their application now crucially depends on the measurement context. Recently it has been argued that a new form of complementarity can provide a solution to the so-called information loss paradox. Stephen Hawking argues that the evolution of black holes cannot be described by standard unitary quantum evolution, because such evolution always preserves information, while the evaporation of a black hole will imply that any information that fell into it is irrevocably lost---hence a "paradox." Some researchers in quantum gravity have argued that this paradox can be resolved if one interprets certain seemingly incompatible descriptions of events around black holes as instead being complementary. In this dissertation I assess the extent to which this black hole complementarity can be undergirded by Bohr's account of the limitations of classical concepts. I begin by offering an interpretation of Bohr's complementarity and the role that it plays in his philosophy of quantum theory. After clarifying the nature of classical concepts, I offer an account of the limitations these concepts face, and argue that Bohr's appeal to disturbance is best understood as referring to these conceptual limits. Following preparatory chapters on issues in quantum field theory and black hole mechanics, I offer an analysis of the information loss paradox and various responses to it. I consider the three most prominent accounts of black hole complementarity and argue that they fail to offer sufficient justification for the proposed incompatibility between descriptions. The lesson that emerges from this
NASA Astrophysics Data System (ADS)
Jain, Ruchika; Sinha, Deepa
2014-09-01
The non-linear stability of L4 in the restricted three-body problem when both primaries are finite straight segments in the presence of third and fourth order resonances has been investigated. Markeev's theorem (Markeev in Libration Points in Celestial Mechanics and Astrodynamics, 1978) is used to examine the non-linear stability for the resonance cases 2:1 and 3:1. It is found that the non-linear stability of L4 depends on the lengths of the segments in both resonance cases. It is also found that the range of stability increases when compared with the classical restricted problem. The results have been applied to the following asteroid systems: (i) 216 Kleopatra-951 Gaspra, (ii) 9 Metis-433 Eros, (iii) 22 Kalliope-243 Ida.
NASA Astrophysics Data System (ADS)
Kelbert, A.; Schultz, A.; Egbert, G.
2006-12-01
We address the non-linear ill-posed inverse problem of reconstructing the global three-dimensional distribution of electrical conductivity in Earth's mantle. We have developed a numerical regularized least-squares inverse solution based on the non-linear conjugate gradients approach. We apply this methodology to the most current low-frequency global observatory data set of Fujii & Schultz (2002), which includes c- and d-responses. We obtain 4-8 layer models satisfying the data. We then describe the features common to all these models and discuss the resolution of our method.
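The machinery named in the abstract — non-linear conjugate gradients applied to a regularized least-squares objective — can be sketched generically. This is our minimal Polak-Ribière implementation with Armijo backtracking applied to a small synthetic Tikhonov problem, not the authors' solver; all data are invented.

```python
import numpy as np

def nlcg(f, grad, m0, iters=200):
    """Minimal Polak-Ribiere non-linear CG with Armijo backtracking."""
    m = m0.copy()
    g = grad(m)
    d = -g
    for _ in range(iters):
        slope = g @ d
        if slope >= 0:                     # safeguard: restart with steepest descent
            d, slope = -g, -(g @ g)
        a = 1.0
        while f(m + a * d) > f(m) + 1e-4 * a * slope and a > 1e-12:
            a *= 0.5                       # backtracking line search
        m = m + a * d
        g_new = grad(m)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-30))
        d = -g_new + beta * d
        g = g_new
    return m

# Regularized least-squares objective: J(m) = ||G m - d||^2 + lam ||m||^2
rng = np.random.default_rng(1)
G = rng.standard_normal((30, 5))
m_true = np.arange(1.0, 6.0)
d_obs = G @ m_true
lam = 1e-3
f = lambda m: np.sum((G @ m - d_obs) ** 2) + lam * np.sum(m ** 2)
grad = lambda m: 2 * G.T @ (G @ m - d_obs) + 2 * lam * m
m_hat = nlcg(f, grad, np.zeros(5))
```

In a real inversion, `G @ m` is replaced by an expensive non-linear forward model and `grad` by an adjoint computation; the CG recursion itself is unchanged.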
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
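The key combinatorial observation can be demonstrated in a simplified form (our illustration, not the published algorithm): in a multi-column non-negative least-squares problem, many columns end up with the same passive (nonzero) set, and all columns in such a group can be solved with one factorization. Here we solve per column with SciPy's `nnls` and merely count the distinct passive sets; the test data are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
A = np.abs(rng.standard_normal((40, 4)))
X_true = np.maximum(rng.standard_normal((4, 50)), 0.0)  # many exact zeros
B = A @ X_true                                          # 50 observation vectors

supports = {}
X = np.zeros((4, 50))
for j in range(B.shape[1]):
    x, _ = nnls(A, B[:, j])            # column-by-column NNLS
    X[:, j] = x
    key = tuple(np.flatnonzero(x > 0)) # passive set of this column
    supports.setdefault(key, []).append(j)

# With 4 components there are at most 2^4 = 16 distinct passive sets,
# so the 50 unconstrained solves collapse into at most 16 group solves.
n_groups = len(supports)
```

Reorganizing the computation around these groups — one pseudoinverse per passive set instead of one per column — is what yields the large speedups for problems with many observation vectors.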
Influence of geometrical parameters on the linear stability of a Bénard-Marangoni problem
NASA Astrophysics Data System (ADS)
Hoyas, S.; Fajardo, P.; Pérez-Quiles, M. J.
2016-04-01
A linear stability analysis of a thin liquid film flowing over a plate is performed. The analysis is performed in an annular domain when momentum diffusivity and thermal diffusivity are comparable (relatively low Prandtl number, Pr =1.2 ). The influence of the aspect ratio (Γ ) and gravity, through the Bond number (Bo ), in the linear stability of the flow are analyzed together. Two different regions in the Γ -Bo plane have been identified. In the first one the basic state presents a linear regime (in which the temperature gradient does not change sign with r ). In the second one, the flow presents a nonlinear regime, also called return flow. A great diversity of bifurcations have been found just by changing the domain depth d . The results obtained in this work are in agreement with some reported experiments, and give a deeper insight into the effect of physical parameters on bifurcations.
Kinematics and tribological problems of linear guidance systems in four contact points
NASA Astrophysics Data System (ADS)
Popescu, A.; Olaru, D.
2016-08-01
A procedure has been developed to determine both the value of the ball's angular velocity and the angular position of this velocity, as functions of the normal loads, in a linear guidance system with four contact points. The program is based on the variational analysis of the power losses in ball-race contacts. On this basis, the two kinematic parameters of the ball (angular velocity and angular position) were determined in a KUE 35-type linear system as a function of the C/P ratio.
NASA Technical Reports Server (NTRS)
Sain, M. K.; Antsaklis, P. J.; Gejji, R. R.; Wyman, B. F.; Peczkowski, J. L.
1981-01-01
Zames (1981) has observed that there is, in general, no 'separation principle' to guarantee optimality of a division between control law design and filtering of plant uncertainty. Peczkowski and Sain (1978) have solved a model matching problem using transfer functions. Building on this investigation, Peczkowski et al. (1979) proposed the Total Synthesis Problem (TSP), wherein both the command/output-response and command/control-response are to be synthesized, subject to the plant constraint. The TSP concept can be subdivided into a Nominal Design Problem (NDP), which is not dependent upon specific controller structures, and a Feedback Synthesis Problem (FSP), which is. Gejji (1980) found that NDP was characterized in terms of the plant structural matrices and a single, 'good' transfer function matrix. Sain et al. (1981) have extended this NDP work. The present investigation is concerned with a study of FSP for the unity feedback case. NDP, together with feedback synthesis, is understood as a Total Synthesis Problem.
NASA Astrophysics Data System (ADS)
Payette, G. S.; Reddy, J. N.
2011-05-01
In this paper we examine the roles of minimization and linearization in the least-squares finite element formulations of nonlinear boundary-value problems. The least-squares principle is based upon the minimization of the least-squares functional constructed via the sum of the squares of appropriate norms of the residuals of the partial differential equations (in the present case we consider L2 norms). Since the least-squares method is independent of the discretization procedure and the solution scheme, the least-squares principle suggests that minimization should be performed prior to linearization, where linearization is employed in the context of either the Picard or Newton iterative solution procedures. However, in the least-squares finite element analysis of nonlinear boundary-value problems, it has become common practice in the literature to exchange the sequence of application of the minimization and linearization operations. The main purpose of this study is to provide a detailed assessment of how the finite element solution is affected when the order of application of these operators is interchanged. The assessment is performed mathematically, through an examination of the variational setting for the least-squares formulation of an abstract nonlinear boundary-value problem, and also computationally, through the numerical simulation of the least-squares finite element solutions of both a nonlinear form of the Poisson equation and the incompressible Navier-Stokes equations. The assessment suggests that although the least-squares principle indicates that minimization should be performed prior to linearization, such an approach is often impractical and not necessary.
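The two orderings can be contrasted on a scalar toy problem (our construction, not the paper's PDE examples). For the residual r(u) = u² - f, "minimize then linearize" becomes Newton's method on the functional J(u) = r(u)², while "linearize then minimize" becomes Gauss-Newton: linearize r about the current iterate, then minimize the resulting quadratic exactly.

```python
f = 2.0  # target: both schemes should converge to sqrt(2)

def newton_on_J(u, iters=50):
    """Minimize-then-linearize: Newton's method on J(u) = (u^2 - f)^2."""
    for _ in range(iters):
        dJ = 4 * u * (u * u - f)       # J'(u)
        d2J = 12 * u * u - 4 * f       # J''(u)
        u = u - dJ / d2J
    return u

def gauss_newton(u, iters=50):
    """Linearize-then-minimize: minimize (r(u) + r'(u) du)^2 in du."""
    for _ in range(iters):
        u = u - (u * u - f) / (2 * u)
    return u

u_a = newton_on_J(1.0)
u_b = gauss_newton(1.0)
```

Both iterations reach the same solution here; the paper's point is that for genuine PDE problems the two orderings generate different discrete systems, and the practically convenient linearize-first route is usually adequate.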
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
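The flavor of a grid-based location problem and a cheap heuristic for it can be sketched with a toy coverage instance. Everything here is invented: the grid size, the Chebyshev coverage radius, and the greedy max-coverage rule, which stands in for the paper's decomposition heuristic (the exact ILP would instead minimize the facility count directly).

```python
# Toy GBLP: choose facility cells so every demand cell of an N x N grid
# lies within Chebyshev radius R of some facility.
N, R = 12, 2
demand = {(i, j) for i in range(N) for j in range(N)}

def covered(fac, cell):
    return max(abs(fac[0] - cell[0]), abs(fac[1] - cell[1])) <= R

facilities, uncovered = [], set(demand)
while uncovered:
    # Greedy: pick the candidate cell covering the most uncovered demand.
    best = max(demand, key=lambda f: sum(covered(f, c) for c in uncovered))
    facilities.append(best)
    uncovered -= {c for c in uncovered if covered(best, c)}
```

Each facility covers at most a 5x5 block here, so at least 6 facilities are needed for the 144 cells; the greedy pass finds a feasible (not necessarily optimal) set, which is the trade-off the paper quantifies against the exact ILP.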
Zhuk, Sergiy
2013-10-15
In this paper we present the Kalman duality principle for a class of linear Differential-Algebraic Equations (DAE) with arbitrary index and time-varying coefficients. We apply it to an ill-posed minimax control problem with a DAE constraint and derive a corresponding dual control problem. It turns out that the dual problem is ill-posed as well, and so classical optimality conditions are not applicable in the general case. We construct a minimizing sequence û_ε for the dual problem by applying the Tikhonov method. Finally, we represent û_ε in feedback form using a Riccati equation on a subspace which corresponds to the differential part of the DAE.
NASA Astrophysics Data System (ADS)
Eghnam, Karam M.; Sheta, Alaa F.
2008-06-01
Development of accurate models is necessary in critical applications such as prediction. In this paper, a solution to the stock prediction problem of the Barents Sea capelin is introduced using Artificial Neural Network (ANN) and Multiple Linear Regression (MLR) models. The capelin stock in the Barents Sea is one of the largest in the world. It normally maintained a fishery with annual catches of up to 3 million tons. The capelin stock problem has an impact on fish stock development. The proposed prediction model was developed using an ANN with its weights adapted using a Genetic Algorithm (GA). The proposed model was compared to the traditional MLR linear model. The results showed that the ANN-GA model produced an overall accuracy 21% better than the MLR model.
The Limits of Black Hole Complementarity
NASA Astrophysics Data System (ADS)
Susskind, Leonard
Black hole complementarity, as originally formulated in the 1990s by Preskill, 't Hooft, and myself, is now being challenged by the Almheiri-Marolf-Polchinski-Sully firewall argument. The AMPS argument relies on an implicit assumption—the "proximity" postulate—which says that the interior of a black hole must be constructed from degrees of freedom that are physically near the black hole. The proximity postulate manifestly contradicts the idea that interior information is redundant with information in Hawking radiation, which is very far from the black hole. AMPS argue that a violation of the proximity postulate would lead to a contradiction in a thought-experiment in which Alice distills the Hawking radiation and brings a bit back to the black hole. According to AMPS the only way to protect against the contradiction is for a firewall to form at the Page time. But the measurement that Alice must make is of such a fine-grained nature that carrying it out before the black hole evaporates may be impossible. Harlow and Hayden have found evidence that the limits of quantum computation do in fact prevent Alice from carrying out her experiment in less than exponential time. If their conjecture is correct then black hole complementarity may be alive and well. My aim here is to give an overview of the firewall argument, and its basis in the proximity postulate; as well as the counterargument based on computational complexity, as conjectured by Harlow and Hayden.
Cauchy problem for non-linear systems of equations in the critical case
NASA Astrophysics Data System (ADS)
Kaikina, E. I.; Naumkin, P. I.; Shishmarev, I. A.
2004-12-01
The large-time asymptotic behaviour is studied for a system of non-linear evolution dissipative equations $u_t + \mathscr{N}(u,u) + \mathscr{L}u = 0$, $x\in\mathbb{R}^n$, $t>0$; $u(0,x) = \widetilde{u}(x)$, $x\in\mathbb{R}^n$, where $\mathscr{L}$ is a linear pseudodifferential operator $\mathscr{L}u = \overline{\mathscr{F}}_{\xi\to x}\bigl(L(\xi)\widehat{u}(\xi)\bigr)$ and the non-linearity $\mathscr{N}$ is a quadratic pseudodifferential operator $\mathscr{N}(u,u) = \overline{\mathscr{F}}_{\xi\to x}\sum_{k,l=1}^{m}\int_{\mathbb{R}^n} A^{kl}(t,\xi,y)\,\widehat{u}_k(t,\xi-y)\,\widehat{u}_l(t,y)\,dy$, where $\widehat{u}\equiv\mathscr{F}_{x\to\xi}u$ is the Fourier transform. Under the assumptions that the initial data $\widetilde{u}\in\mathbf{H}^{\beta,0}\cap\mathbf{H}^{0,\beta}$, $\beta>n/2$, are sufficiently small, where $\mathbf{H}^{n,m} = \{\phi\in\mathbf{L}^2 : \Vert\langle x\rangle^{m}\langle i\partial_x\rangle^{n}\phi(x)\Vert_{\mathbf{L}^2}<\infty\}$, $\langle x\rangle = \sqrt{1+x^2}$, is a weighted Sobolev space, and that the total mass vector $M = \int\widetilde{u}(x)\,dx$ …
NASA Astrophysics Data System (ADS)
Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.
2014-12-01
Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem, which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator. These involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently-developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with `heavy' smoothers, which require solutions of large per-node sub-problems, well
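The Schur-complement decoupling named as the second approach can be shown on a tiny dense saddle-point system (illustrative only; production codes apply preconditioned Krylov methods to sparse operators, and all matrices here are random stand-ins).

```python
import numpy as np

# Saddle-point system  [[A, B^T], [B, 0]] [u; p] = [f; g]
# with SPD "viscous" block A and discrete divergence B.
rng = np.random.default_rng(3)
n, m = 8, 3
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
B = rng.standard_normal((m, n))
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Decouple: eliminate u to get the Schur complement S = B A^{-1} B^T,
# solve for the pressure p first, then back-substitute for u.
Ainv_BT = np.linalg.solve(A, B.T)
Ainv_f = np.linalg.solve(A, f)
S = B @ Ainv_BT
p = np.linalg.solve(S, B @ Ainv_f - g)
u = np.linalg.solve(A, f - B.T @ p)

# Residual check against the monolithic system.
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
resid = K @ np.concatenate([u, p]) - np.concatenate([f, g])
```

In practice S is never formed explicitly; the inner `solve(A, ...)` actions become inexact multigrid sweeps, which is exactly where the trade-offs discussed in the abstract arise.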
Linear and nonlinear pattern selection in Rayleigh-Benard stability problems
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1993-01-01
A new algorithm is introduced to compute finite-amplitude states using primitive variables for Rayleigh-Benard convection on relatively coarse meshes. The algorithm is based on a finite-difference matrix-splitting approach that separates all physical and dimensional effects into one-dimensional subsets. The nonlinear pattern selection process for steady convection in an air-filled square cavity with insulated side walls is investigated for Rayleigh numbers up to 20,000. The internalization of disturbances that evolve into coherent patterns is investigated and transient solutions from linear perturbation theory are compared with and contrasted to the full numerical simulations.
A Longitudinal Solution to the Problem of Differential Linear Growth Patterns in Quasi-Experiments.
ERIC Educational Resources Information Center
Olejnik, Stephen; Porter, Andrew C.
Differential achievement growth patterns between comparison groups is a problem associated with data analysis in compensatory education programs. Children in greatest need of additional assistance, are usually assigned to the program rather than to an alternative treatment so that the comparison groups may vary in several ways, in addition to the…
ERIC Educational Resources Information Center
Stamovlasis, Dimitrios
2010-01-01
The aim of the present paper is two-fold. First, it attempts to support previous findings on the role of some psychometric variables, such as, M-capacity, the degree of field dependence-independence, logical thinking and the mobility-fixity dimension, on students' achievement in chemistry problem solving. Second, the paper aims to raise some…
ERIC Educational Resources Information Center
Lawrence, Virginia
No longer just a user of commercial software, the 21st century teacher is a designer of interactive software based on theories of learning. This software, a comprehensive study of straight-line equations, enhances conceptual understanding, sketching, graphic interpretive and word problem solving skills, as well as making connections to real-life and…
Fitting of dihedral terms in classical force fields as an analytic linear least-squares problem.
Hopkins, Chad W; Roitberg, Adrian E
2014-07-28
The derivation and optimization of most energy terms in modern force fields are aided by automated computational tools. It is therefore important to have algorithms to rapidly and precisely train large numbers of interconnected parameters to allow investigators to make better decisions about the content of molecular models. In particular, the traditional approach to deriving dihedral parameters has been a least-squares fit to target conformational energies through variational optimization strategies. We present a computational approach for simultaneously fitting force field dihedral amplitudes and phase constants which is analytic within the scope of the data set. This approach completes the optimal molecular mechanics representation of a quantum mechanical potential energy surface in a single linear least-squares fit by recasting the dihedral potential into a linear function in the parameters. We compare the resulting method to a genetic algorithm in terms of computational time and quality of fit for two simple molecules. As suggested in previous studies, arbitrary dihedral phases are only necessary when modeling chiral molecules, which include more than half of drugs currently in use, so we also examine a dihedral parametrization case for the drug amoxicillin and one of its stereoisomers where the target dihedral includes a chiral center. Asymmetric dihedral phases are needed in these types of cases to properly represent the quantum mechanical energy surface and to differentiate between stereoisomers about the chiral center.
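The recasting described above can be sketched in a few lines (with made-up target energies, not real force-field data): a term A_n cos(nφ + δ_n), nonlinear in amplitude and phase, is linear in a_n = A_n cos δ_n and b_n = -A_n sin δ_n, so both parameters fall out of one linear least-squares solve.

```python
import numpy as np

# Synthetic "target" dihedral energies built from known amplitudes/phases,
# so the recovered parameters can be checked. Since
#   A cos(n*phi + d) = (A cos d) cos(n*phi) + (-A sin d) sin(n*phi),
# the fit is linear in the coefficients of the cos/sin columns.
orders = [1, 2, 3]
true_amp = {1: 2.0, 2: 0.7, 3: 0.3}
true_phase = {1: 0.4, 2: 0.0, 3: -1.1}

phi = np.linspace(0.0, 2 * np.pi, 50, endpoint=False)
energy = sum(true_amp[n] * np.cos(n * phi + true_phase[n]) for n in orders)

# Design matrix: one cosine and one sine column per periodicity n.
X = np.column_stack([np.cos(n * phi) for n in orders] +
                    [np.sin(n * phi) for n in orders])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)
a, b = coef[:len(orders)], coef[len(orders):]

# Recover amplitude and phase from the linear coefficients.
amp = np.hypot(a, b)
phase = np.arctan2(-b, a)
print(dict(zip(orders, np.round(amp, 6))))
```

The asymmetric (non-zero) phases the paper needs for chiral centers come out of the same single solve; no variational restart per phase is required.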
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.
Black hole complementarity in gravity's rainbow
Gim, Yongwan; Kim, Wontae E-mail: wtkim@sogang.ac.kr
2015-05-01
To see how gravity's rainbow works for black hole complementarity, we evaluate the required energy for duplication of information in the context of black hole complementarity by calculating the critical value of the rainbow parameter in a certain class of rainbow Schwarzschild black holes. The resultant energy can be written as a well-defined limit for the vanishing rainbow parameter, which characterizes the deformation of the relativistic dispersion relation in the freely falling frame. It shows that the duplication of information in quantum mechanics could not be allowed below a certain critical value of the rainbow parameter; however, it might be possible above that critical value, so that a consistent formulation in our model requires additional constraints or some other resolution for the latter case.
Solution of non-linear inverse heat conduction problems using the method of lines
NASA Astrophysics Data System (ADS)
Taler, J.; Duda, P.
Two space marching methods for solving the one-dimensional nonlinear inverse heat conduction problems are presented. The temperature-dependent thermal properties and the boundary condition on the accessible part of the boundary of the body are known. Additional temperature measurements in time are taken with a sensor located in an arbitrary position within the solid, and the objective is to determine the surface temperature and heat flux on the remaining part of the unspecified boundary. The methods have the advantage that time derivatives are not replaced by finite differences, and the good accuracy of the method results from an appropriate approximation of the first time derivative using smoothing polynomials. The extension of the first method presented in this study to higher-dimensional inverse heat conduction problems is straightforward.
The nonconforming linear strain tetrahedron for a large deformation elasticity problem
NASA Astrophysics Data System (ADS)
Hansbo, Peter; Larsson, Fredrik
2016-08-01
In this paper we investigate the performance of the nonconforming linear strain tetrahedron element introduced by Hansbo (Comput Methods Appl Mech Eng 200(9-12):1311-1316, 2011; J Numer Methods Eng 91(10):1105-1114, 2012). This approximation uses midpoints of edges on tetrahedra in three dimensions with either point continuity or mean continuity along edges of the tetrahedra. Since it contains (rotated) bilinear terms it performs substantially better than the standard constant strain element in bending. It also allows for under-integration in the form of one point Gauss integration of volumetric terms in near incompressible situations. We combine under-integration of the volumetric terms with hourglass stabilization for the isochoric terms.
A linear decomposition method for large optimization problems. Blueprint for development
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1982-01-01
A method is proposed for decomposing large optimization problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems. The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and optimizing each subsystem separately. Coupling of the subproblems is accounted for by subsequent optimization of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition method is also shown to be compatible with the natural human organization of the design process of engineering systems. The method is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.
NASA Astrophysics Data System (ADS)
Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano
2014-08-01
Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition speed and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, a sphere and a cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM) and a cylinder is the most relevant geometry for a pin-hole relation as an assembly feature to construct a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This causes a performance improvement in the optimization of a non-linear function fitting the three geometries. The results show that, with this combination, a higher quality of fitting results, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an ‘incomplete-point-cloud’, which is a situation where the point cloud does not cover a complete feature, e.g. from half of the total part surface, is also investigated. Finally, a case study of fitting a hemisphere is presented.
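A hedged sketch of the non-linear least-squares step for the circle case, in plain NumPy: the paper seeds Levenberg-Marquardt with a chaos-optimization guess, but here a simple centroid-based guess stands in for illustration, and the damping-update rule is one common textbook variant, not the authors' implementation.

```python
import numpy as np

# Synthetic noisy circle data (center (3, -2), radius 5).
rng = np.random.default_rng(1)
cx, cy, R = 3.0, -2.0, 5.0
t = rng.uniform(0, 2 * np.pi, 200)
x = cx + R * np.cos(t) + rng.normal(0, 0.01, t.size)
y = cy + R * np.sin(t) + rng.normal(0, 0.01, t.size)

def residual_jac(p):
    a, b, r = p
    d = np.hypot(x - a, y - b)
    res = d - r                          # signed radial misfit per point
    J = np.column_stack([-(x - a) / d, -(y - b) / d, -np.ones_like(d)])
    return res, J

# Initial guess: centroid and mean distance to it (the paper would use CO).
p = np.array([x.mean(), y.mean(),
              np.hypot(x - x.mean(), y - y.mean()).mean()])
lam = 1e-3                               # LM damping parameter
for _ in range(50):
    res, J = residual_jac(p)
    H = J.T @ J + lam * np.eye(3)        # damped normal equations
    step = np.linalg.solve(H, -J.T @ res)
    new_res, _ = residual_jac(p + step)
    if new_res @ new_res < res @ res:    # accept step, relax damping
        p, lam = p + step, lam * 0.5
    else:                                # reject step, increase damping
        lam *= 10.0
print(np.round(p, 3))
```

The role of a better initial guess (the paper's CO step) is to land inside the basin of attraction of the global minimum, which matters most for the incomplete-point-cloud cases.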
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Lomax, Harvard
1950-01-01
Following the introduction of the linearized partial differential equation for nonsteady three-dimensional compressible flow, general methods of solution are given for the two and three-dimensional steady-state and two-dimensional unsteady-state equations. It is also pointed out that, in the absence of thickness effects, linear theory yields solutions consistent with the assumptions made when applied to lifting-surface problems for swept-back plan forms at sonic speeds. The solutions of the particular equations are determined in all cases by means of Green's theorem, and thus depend on the use of Green's equivalent layer of sources, sinks, and doublets. Improper integrals in the supersonic theory are treated by means of Hadamard's "finite part" technique.
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
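As a concrete instance of the third-order Runge-Kutta family the report studies, here is one well-known member (the Shu-Osher SSP(3,3) scheme, written in Butcher form) together with a convergence check; this is an illustrative example, not one of the report's five derived methods.

```python
import numpy as np

# One explicit third-order Runge-Kutta step (Shu-Osher SSP(3,3) scheme).
def rk3_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    k3 = f(t + 0.5 * h, y + 0.25 * h * (k1 + k2))
    return y + (h / 6.0) * (k1 + k2 + 4.0 * k3)

# Verify third-order accuracy on y' = -y, y(0) = 1: halving the step size
# should cut the error at t = 1 by roughly a factor of eight.
def f(t, y):
    return -y

errors = []
for n in (50, 100):
    h, y = 1.0 / n, 1.0
    for i in range(n):
        y = rk3_step(f, i * h, y, h)
    errors.append(abs(y - np.exp(-1.0)))
print(errors[0] / errors[1])
```

Accuracy alone does not distinguish members of the family; as the abstract notes, the free coefficients are what trade off stability region, stiff behavior, memory, and simplicity.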
NASA Astrophysics Data System (ADS)
Hampel, Uwe; Freyer, Richard
1996-12-01
We present a reconstruction scheme which solves the inverse linear problem in optical absorption tomography for radially symmetric objects. This is a relevant geometry for optical diagnosis in soft tissues, e.g. breast, testis and even head. The algorithm utilizes an invariance property of the linear imaging operator in homogeneously scattering media. The inverse problem is solved in the Fourier space of the angular component, leading to a considerable dimension reduction which allows the inverse to be computed in a direct way using singular value decomposition. There are two major advantages of this approach. First, the inverse operator can be stored in computer memory and the computation of the inverse problem comprises only a few matrix multiplications. This makes the algorithm very fast and suitable for parallel execution. Second, we obtain the spectrum of the imaging operator, which allows conclusions about reconstruction limits in the presence of noise and gives a termination criterion for image synthesis. To demonstrate the capabilities of this scheme, reconstruction results from synthetic and phantom data are presented.
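The precompute-once, apply-by-multiplication strategy can be sketched as follows; the operator below is a synthetic stand-in with a rapidly decaying spectrum, not a real absorption-tomography kernel, and the truncation threshold plays the role of the noise-driven termination criterion.

```python
import numpy as np

# Build a synthetic ill-conditioned imaging operator A = U S V^T.
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((40, 40)))
V, _ = np.linalg.qr(rng.standard_normal((30, 30)))
s = 10.0 ** -np.arange(30, dtype=float)       # singular values 1, 0.1, ...
A = U[:, :30] @ np.diag(s) @ V.T

# Factor once, truncate tiny singular values (noise limit), and store the
# pseudoinverse; every later reconstruction is one matrix multiplication.
tol = 1e-8
keep = s > tol * s[0]
A_pinv = V[:, keep] @ np.diag(1.0 / s[keep]) @ U[:, :30][:, keep].T

# An object lying inside the recoverable subspace is reconstructed exactly
# from noiseless data.
c = rng.standard_normal(int(keep.sum()))
x_true = V[:, keep] @ c
b = A @ x_true                                # synthetic measurement
x_rec = A_pinv @ b                            # one matrix-vector product
```

The retained spectrum also quantifies what the abstract calls reconstruction limits: components of the object along discarded singular directions are simply invisible at the given noise level.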
NASA Astrophysics Data System (ADS)
Bonometto, Silvio A.; Mainini, Roberto; Macciò, Andrea V.
2015-10-01
In this first paper we discuss the linear theory and the background evolution of a new class of models we dub SCDEW: Strongly Coupled DE, plus WDM. In these models, WDM dominates today's matter density; like baryons, WDM is uncoupled. Dark energy is a scalar field Φ; its coupling to ancillary cold dark matter (CDM), whose present-day density is ≪1 per cent, is an essential model feature. Such coupling, in fact, allows the formation of cosmic structures, in spite of very low WDM particle masses (˜100 eV). SCDEW models yield cosmic microwave background and linear large-scale features substantially indistinguishable from ΛCDM, but thanks to the very low WDM masses they strongly alleviate ΛCDM issues on small scales, as confirmed via numerical simulations in the second associated paper. Moreover, SCDEW cosmologies significantly ease the coincidence and fine-tuning problems of ΛCDM and, by using a field theory approach, we also outline possible links with inflationary models. We also discuss a possible fading of the coupling at low redshifts, which prevents non-linearities in the CDM component from causing computational problems. The (possible) low-z coupling suppression, its mechanism, and its consequences are however still open questions - not necessarily problems - for SCDEW models. The coupling intensity and the WDM particle mass, although being extra parameters with respect to ΛCDM, are found to be substantially constrained a priori so that, if SCDEW is the underlying cosmology, we expect most data to fit also ΛCDM predictions.
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
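The two calibration strategies compared in the paper can be sketched on synthetic data (the instrument model and numbers below are invented for illustration): forward regression fits readings against standards and then inverts the fitted line, while reverse regression regresses standards on readings and uses the fit directly.

```python
import numpy as np

# Synthetic calibration experiment: a linear instrument with known truth,
# observed = 0.5 + 2.0 * standard + noise.
rng = np.random.default_rng(3)
standards = np.linspace(1.0, 10.0, 20)            # NIST-style reference values
observed = 0.5 + 2.0 * standards + rng.normal(0, 0.05, 20)

# Classical approach: forward regression (observed ~ standard), then invert
# the fitted line to turn a new reading into a measurement.
b1, b0 = np.polyfit(standards, observed, 1)       # slope, intercept
new_reading = 12.5                                # true value is 6.0
x_forward = (new_reading - b0) / b1

# Reverse regression (standard ~ observed): used directly, no inversion,
# but it treats the error-free standards as the noisy response.
c1, c0 = np.polyfit(observed, standards, 1)
x_reverse = c0 + c1 * new_reading
print(round(x_forward, 3), round(x_reverse, 3))
```

With precise data the two estimates nearly coincide; the paper's point is that reverse regression trades the awkward inversion step for a violation of the standard regression error assumptions.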
Extended cubic B-spline method for solving a linear system of second-order boundary value problems.
Heilat, Ahmed Salem; Hamid, Nur Nadiah Abd; Ismail, Ahmad Izani Md
2016-01-01
A method based on extended cubic B-spline is proposed to solve a linear system of second-order boundary value problems. In this method, two free parameters, [Formula: see text] and [Formula: see text], play an important role in producing accurate results. Optimization of these parameters is carried out and the truncation error is calculated. This method is tested on three examples. The examples suggest that this method produces comparable or more accurate results than cubic B-spline and some other methods. PMID:27547688
Modeling Granular Materials as Compressible Non-Linear Fluids: Heat Transfer Boundary Value Problems
Massoudi, M.C.; Tran, P.X.
2006-01-01
We discuss three boundary value problems in the flow and heat transfer analysis in flowing granular materials: (i) the flow down an inclined plane with radiation effects at the free surface; (ii) the natural convection flow between two heated vertical walls; (iii) the shearing motion between two horizontal flat plates with heat conduction. It is assumed that the material behaves like a continuum, similar to a compressible nonlinear fluid where the effects of density gradients are incorporated in the stress tensor. For a fully developed flow the equations are simplified to a system of three nonlinear ordinary differential equations. The equations are made dimensionless and a parametric study is performed where the effects of various dimensionless numbers representing the effects of heat conduction, viscous dissipation, radiation, and so forth are presented.
Samet Y. Kadioglu; Robert R. Nourgaliev; Vincent A. Mousseau
2008-03-01
We perform a comparative study of harmonic versus arithmetic averaging of the heat conduction coefficient when solving non-linear heat transfer problems. In the literature, the harmonic average is the method of choice, because it is widely believed that the harmonic average is the more accurate model. However, our analysis reveals that this is not necessarily true. For instance, we show a case in which the harmonic average is less accurate when a coarser mesh is used. More importantly, we demonstrate that if the boundary layers are finely resolved, then the harmonic and arithmetic averaging techniques are identical in the truncation error sense. Our analysis further reveals that the accuracy of these two techniques depends on how the physical problem is modeled.
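The modeling question can be made concrete with the textbook two-slab example (numbers chosen for illustration): for a sharp material interface the exact effective conductivity is the harmonic mean of the cell values, while for a smooth, well-resolved coefficient the two averages agree to leading order, consistent with the truncation-error claim above.

```python
# Sharp interface: two equal-thickness slabs with conductivities k1, k2
# in series. The exact effective conductivity is the harmonic mean, so the
# harmonic face average is exact and the arithmetic average grossly
# overestimates the flux.
k1, k2 = 1.0, 100.0
harmonic = 2.0 * k1 * k2 / (k1 + k2)
arithmetic = 0.5 * (k1 + k2)
exact = 2.0 / (1.0 / k1 + 1.0 / k2)   # series-resistance formula

# Smooth, finely resolved coefficient: neighboring cell values are close,
# and the two averages differ only at higher order.
ka, kb = 1.0, 1.01
gap = abs(2.0 * ka * kb / (ka + kb) - 0.5 * (ka + kb))
print(harmonic, arithmetic, exact, gap)
```

This is exactly the regime distinction in the abstract: the harmonic average wins when an interface is unresolved, and the choice becomes immaterial once boundary layers are resolved.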
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
NASA Astrophysics Data System (ADS)
Elizondo, D.; Cappelaere, B.; Faure, Ch.
2002-04-01
Emerging tools for automatic differentiation (AD) of computer programs should be of great benefit for the implementation of many derivative-based numerical methods such as those used for inverse modeling. The Odyssée software, one such tool for Fortran 77 codes, has been tested on a sample model that solves a 2D non-linear diffusion-type equation. Odyssée offers both the forward and the reverse differentiation modes, that produce the tangent and the cotangent models, respectively. The two modes have been implemented on the sample application. A comparison is made with a manually-produced differentiated code for this model (MD), obtained by solving the adjoint equations associated with the model's discrete state equations. Following a presentation of the methods and tools and of their relative advantages and drawbacks, the performances of the codes produced by the manual and automatic methods are compared, in terms of accuracy and of computing efficiency (CPU and memory needs). The perturbation method (finite-difference approximation of derivatives) is also used as a reference. Based on the test of Taylor, the accuracy of the two AD modes proves to be excellent and as high as machine precision permits, a good indication of Odyssée's capability to produce error-free codes. In comparison, the manually-produced derivatives (MD) sometimes appear to be slightly biased, which is likely due to the fact that a theoretical model (state equations) and a practical model (computer program) do not exactly coincide, while the accuracy of the perturbation method is very uncertain. The MD code largely outperforms all other methods in computing efficiency, a subject of current research for the improvement of AD tools. Yet these tools can already be of considerable help for the computer implementation of many numerical methods, avoiding the tedious task of hand-coding the differentiation of complex algorithms.
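The "test of Taylor" used above to validate differentiated code can be sketched in a few lines (the function and gradient here are illustrative, not from the Odyssée-generated model): if g is the exact gradient of f, the remainder r(h) = |f(x + h d) − f(x) − h g(x)·d| shrinks like h², whereas a wrong gradient only gives O(h).

```python
import numpy as np

# A smooth test function and its hand-derived "tangent" code.
def f(x):
    return np.sum(np.sin(x) ** 2)

def grad_f(x):
    return 2.0 * np.sin(x) * np.cos(x)

rng = np.random.default_rng(4)
x = rng.standard_normal(5)       # evaluation point
d = rng.standard_normal(5)       # perturbation direction

# Taylor remainders at two step sizes; dividing h by 10 should divide
# the remainder by about 100 when the gradient is correct.
remainders = []
for h in (1e-2, 1e-3):
    r = abs(f(x + h * d) - f(x) - h * (grad_f(x) @ d))
    remainders.append(r)
print(remainders[0] / remainders[1])
```

This is the same diagnostic that, in the study, exposed the slight bias of the manually derived adjoint while certifying the AD-generated derivatives to machine precision.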
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, Efi; Schwenk, Jon; Tejedor, Alejandro
2015-04-01
Are the dynamics of meandering rivers non-linear? What information does the shape of an oxbow lake carry about its forming process? How to characterize self-dissimilar landscapes carrying the signature of larger-scale geologic or tectonic controls? Do we have proper frameworks for quantifying the topology and dynamics of deltaic systems? What can the structural complexity of river networks (erosional and depositional) reveal about their vulnerability and response to change? Can the structure and dynamics of river networks reveal potential hotspots of geomorphic change? All of the above problems are at the heart of understanding landscape evolution, relating process to structure and form, and developing methodologies for inferring how a system might respond to future changes. We argue that a new surge of rigorous methodologies is needed to address these problems. The innovations introduced herein are: (1) gradual wavelet reconstruction for depicting threshold nonlinearity (due to cutoffs) versus inherent nonlinearity (due to underlying dynamics) in river meandering, (2) graph theory for studying the topology and dynamics of deltaic river networks and their response to change, and (3) Lagrangian approaches combined with topology and non-linear dynamics for inferring sediment-driven hotspots of geomorphic change.
Complementarity in temporal ghost interference and temporal quantum eraser
NASA Astrophysics Data System (ADS)
Cho, Kiyoung; Noh, Jaewoo
2015-06-01
We present a theory for the complementarity in temporal interference and quantum erasure. We consider the case of an entangled biphoton where we can get the information about a single photon's arrival time without making a disturbing measurement. We find a mathematical equation for the complementary relation for a temporal double slit experiment. We also propose a quantum eraser scheme that will elucidate that the complementarity originates from quantum entanglement.
Complementarity in Generic Open Quantum Systems
NASA Astrophysics Data System (ADS)
Banerjee, Subhashish; Srikanth, R.
We develop a unified, information theoretic interpretation of the number-phase complementarity that is applicable both to finite-dimensional (atomic) and infinite-dimensional (oscillator) systems, with number treated as a discrete Hermitian observable and phase as a continuous positive operator valued measure (POVM). The relevant uncertainty principle is obtained as a lower bound on entropy excess, X, the difference between the entropy of one variable, typically the number, and the knowledge of its complementary variable, typically the phase, where knowledge of a variable is defined as its relative entropy with respect to the uniform distribution. In the case of finite-dimensional systems, a weighting of phase knowledge by a factor μ (> 1) is necessary in order to make the bound tight, essentially on account of the POVM nature of phase as defined here. Numerical and analytical evidence suggests that μ tends to 1 as the system dimension becomes infinite. We study the effect of non-dissipative and dissipative noise on these complementary variables for an oscillator as well as atomic systems.
A toy model of black hole complementarity
NASA Astrophysics Data System (ADS)
Banerjee, Souvik; Bryan, Jan-Willem; Papadodimas, Kyriakos; Raju, Suvrat
2016-05-01
We consider the algebra of simple operators defined in a time band in a CFT with a holographic dual. When the band is smaller than the light crossing time of AdS, an entire causal diamond in the center of AdS is separated from the band by a horizon. We show that this algebra obeys a version of the Reeh-Schlieder theorem: the action of the algebra on the CFT vacuum can approximate any low energy state in the CFT arbitrarily well, but no operator within the algebra can exactly annihilate the vacuum. We show how to relate local excitations in the complement of the central diamond to simple operators in the band. Local excitations within the diamond are invisible to the algebra of simple operators in the band by causality, but can be related to complicated operators called "precursors". We use the Reeh-Schlieder theorem to write down a simple and explicit formula for these precursors on the boundary. We comment on the implications of our results for black hole complementarity and the emergence of bulk locality from the boundary.
Quark lepton complementarity and renormalization group effects
Schmidt, Michael A.; Smirnov, Alexei Yu.
2006-12-01
We consider a scenario for the quark-lepton complementarity relations between mixing angles in which the bimaximal mixing follows from the neutrino mass matrix. According to this scenario, in the lowest order the angle θ_12 is ≈1σ (1.5°-2°) above the best fit point, coinciding practically with the tribimaximal mixing prediction. Realization of this scenario in the context of the seesaw type-I mechanism with leptonic Dirac mass matrices approximately equal to the quark mass matrices is studied. We calculate the renormalization group corrections to θ_12 as well as to θ_13 in the standard model (SM) and minimal supersymmetric standard model (MSSM). We find that in a large part of the parameter space the corrections δθ_12 are small or negligible. In the MSSM version of the scenario, the correction δθ_12 is in general positive. Small negative corrections appear in the case of an inverted mass hierarchy and opposite CP parities of ν_1 and ν_2, when leading contributions to the θ_12 running are strongly suppressed. The corrections are negative in the SM version in a large part of the parameter space for values of the relative CP phase of ν_1 and ν_2: φ > π/2.
Causal patch complementarity: The inside story for old black holes
NASA Astrophysics Data System (ADS)
Ilgin, Irfan; Yang, I.-Sheng
2014-02-01
We carefully analyze the causal patches which belong to observers falling into an old black hole. We show that without a distillation-like process, the Almheiri-Marolf-Polchinski-Sully (AMPS) paradox cannot challenge complementarity. That is because the two ingredients for the paradox, the interior region and the early Hawking radiation, cannot be spacelike separated and both low energy within any single causal patch. Either the early quanta have Planckian wavelengths, or the interior region is exponentially smaller than the Schwarzschild size. This means that their appearances in the low-energy theory are strictly timelike separated, which nullifies the problem of double entanglement/purity or quantum cloning. This verifies that the AMPS paradox is either only a paradox in the global description like the original information paradox, or a direct consequence of the assumption that a distillation process is feasible without hidden consequences. We discuss possible relations to cosmological causal patches and the possibility of transferring energy without transferring quantum information.
NASA Astrophysics Data System (ADS)
Barutello, Vivina; Jadanza, Riccardo D.; Portaluri, Alessandro
2016-01-01
It is well known that the linear stability of the Lagrangian elliptic solutions in the classical planar three-body problem depends on a mass parameter β and on the eccentricity e of the orbit. We consider only the circular case ( e = 0) but under the action of a broader family of singular potentials: α-homogeneous potentials, for α in (0, 2), and the logarithmic one. It turns out indeed that the Lagrangian circular orbit persists also in this more general setting. We discover a region of linear stability expressed in terms of the homogeneity parameter α and the mass parameter β, then we compute the Morse index of this orbit and of its iterates and we find that the boundary of the stability region is the envelope of a family of curves on which the Morse indices of the iterates jump. In order to conduct our analysis we rely on a Maslov-type index theory devised and developed by Y. Long, X. Hu and S. Sun; a key role is played by an appropriate index theorem and by some precise computations of suitable Maslov-type indices.
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Paraschivoiu, Marius
1998-01-01
We present a finite element technique for the efficient generation of lower and upper bounds to outputs which are linear functionals of the solutions to the incompressible Stokes equations in two space dimensions; the finite element discretization is effected by Crouzeix-Raviart elements, the discontinuous pressure approximation of which is central to our approach. The bounds are based upon the construction of an augmented Lagrangian: the objective is a quadratic "energy" reformulation of the desired output; the constraints are the finite element equilibrium equations (including the incompressibility constraint), and the intersubdomain continuity conditions on velocity. Appeal to the dual max-min problem for appropriately chosen candidate Lagrange multipliers then yields inexpensive bounds for the output associated with a fine-mesh discretization; the Lagrange multipliers are generated by exploiting an associated coarse-mesh approximation. In addition to the requisite coarse-mesh calculations, the bound technique requires solution only of local subdomain Stokes problems on the fine-mesh. The method is illustrated for the Stokes equations, in which the outputs of interest are the flowrate past, and the lift force on, a body immersed in a channel.
NASA Technical Reports Server (NTRS)
Lee, Y. M.
1971-01-01
Using a linearized theory of a thermally and mechanically interacting mixture of a linear elastic solid and a viscous fluid, we derive a fundamental relation in integral form called a reciprocity relation. This reciprocity relation relates the solution of one initial-boundary value problem with a given set of initial and boundary data to the solution of a second initial-boundary value problem with different initial and boundary data for a given interacting mixture. From this general integral relation, reciprocity relations are derived for a heat-conducting linear elastic solid and for a heat-conducting viscous fluid. An initial-boundary value problem is posed and solved for the mixture of linear elastic solid and viscous fluid. With the aid of the Laplace transform and contour integration, a real integral representation for the displacement of the solid constituent is obtained as one of the principal results of the analysis.
Addona, Davide
2015-08-15
We obtain weighted uniform estimates for the gradient of the solutions to a class of linear parabolic Cauchy problems with unbounded coefficients. Such estimates are then used to prove existence and uniqueness of the mild solution to a semi-linear backward parabolic Cauchy problem, where the differential equation is the Hamilton–Jacobi–Bellman equation of a suitable optimal control problem. Via backward stochastic differential equations, we show that the mild solution is indeed the value function of the controlled equation and that the feedback law is verified.
NASA Astrophysics Data System (ADS)
Helman, E. Udi
This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
Comments on black holes I: the possibility of complementarity
NASA Astrophysics Data System (ADS)
Mathur, Samir D.; Turton, David
2014-01-01
We comment on a recent paper of Almheiri, Marolf, Polchinski and Sully, who argue against black hole complementarity based on the claim that an infalling observer 'burns' as he attempts to cross the horizon. We show that measurements made by an infalling observer outside the horizon are statistically identical for the cases of vacuum at the horizon and radiation emerging from a stretched horizon. This forces us to follow the dynamics all the way to the horizon, where we need to know the details of Planck-scale physics. We note that in string theory the fuzzball structure of microstates does not give any place to 'continue through' this Planck regime. AMPS argue that interactions near the horizon preclude traditional complementarity. But the conjecture of 'fuzzball complementarity' works in the opposite way: the infalling quantum is absorbed by the fuzzball surface, and it is the resulting dynamics that is conjectured to admit a complementary description.
Media complementarity and health information seeking in Puerto Rico.
Tian, Yan; Robinson, James D
2014-01-01
This investigation incorporates the Orientation1-Stimulus-Orientation2-Response model on the antecedents and outcomes of individual-level complementarity of media use in health information seeking. A secondary analysis of the Health Information National Trends Survey Puerto Rico data suggests that education and gender were positively associated with individual-level media complementarity of health information seeking, which, in turn, was positively associated with awareness of health concepts and organizations, and this awareness was positively associated with a specific health behavior: fruit and vegetable consumption. This study extends the research in media complementarity and health information use; it provides an integrative social psychological model empirically supported by the Health Information National Trends Survey Puerto Rico data.
Akcelik, Volkan; Flath, Pearl; Ghattas, Omar; Hill, Judith C; Van Bloemen Waanders, Bart; Wilcox, Lucas
2011-01-01
We consider the problem of estimating the uncertainty in large-scale linear statistical inverse problems with high-dimensional parameter spaces within the framework of Bayesian inference. When the noise and prior probability densities are Gaussian, the solution to the inverse problem is also Gaussian, and is thus characterized by the mean and covariance matrix of the posterior probability density. Unfortunately, explicitly computing the posterior covariance matrix requires as many forward solutions as there are parameters, and is thus prohibitive when the forward problem is expensive and the parameter dimension is large. However, for many ill-posed inverse problems, the Hessian matrix of the data misfit term has a spectrum that collapses rapidly to zero. We present a fast method for computation of an approximation to the posterior covariance that exploits the lowrank structure of the preconditioned (by the prior covariance) Hessian of the data misfit. Analysis of an infinite-dimensional model convection-diffusion problem, and numerical experiments on large-scale 3D convection-diffusion inverse problems with up to 1.5 million parameters, demonstrate that the number of forward PDE solves required for an accurate low-rank approximation is independent of the problem dimension. This permits scalable estimation of the uncertainty in large-scale ill-posed linear inverse problems at a small multiple (independent of the problem dimension) of the cost of solving the forward problem.
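The low-rank construction described above can be illustrated with a minimal NumPy sketch. Small random SPD matrices stand in for the PDE-based data-misfit Hessian H and prior covariance G (these stand-ins, and all variable names, are assumptions, not the paper's operators): the prior-preconditioned Hessian is eigendecomposed, the top r eigenpairs are retained, and the posterior covariance is formed as a low-rank update to the prior; with all eigenpairs retained it reproduces the exact Gaussian posterior covariance (H + G⁻¹)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 50  # parameter dimension; r = retained eigenpairs (r = n -> exact)

# Synthetic SPD stand-ins for the data-misfit Hessian H and prior covariance G.
A = rng.standard_normal((n, n)); H = A @ A.T / n
B = rng.standard_normal((n, n)); G = B @ B.T / n + np.eye(n)

# Prior-preconditioned Hessian: Ht = G^{1/2} H G^{1/2}
w, U = np.linalg.eigh(G)
G_half = U @ np.diag(np.sqrt(w)) @ U.T
Ht = G_half @ H @ G_half

# Retain the r largest eigenpairs of Ht (eigh returns ascending order).
lam, V = np.linalg.eigh(Ht)
lam, V = lam[::-1][:r], V[:, ::-1][:, :r]

# Low-rank posterior covariance:
# G_post ~= G - sum_i lam_i/(1+lam_i) (G^{1/2} v_i)(G^{1/2} v_i)^T
W = G_half @ V
G_post_lr = G - W @ np.diag(lam / (1.0 + lam)) @ W.T

# With every eigenpair kept this equals the exact posterior (H + G^{-1})^{-1}.
G_post_exact = np.linalg.inv(H + np.linalg.inv(G))
print(np.allclose(G_post_lr, G_post_exact))  # True when r = n
```

Because the factors lam/(1+lam) decay with the spectrum of the preconditioned Hessian, truncating at small r captures most of the variance reduction, which is the property the abstract exploits at scale.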
NASA Technical Reports Server (NTRS)
Hall, Philip
1989-01-01
Goertler vortices are thought to be the cause of transition in many fluid flows of practical importance. A review of the different stages of vortex growth is given. In the linear regime, nonparallel effects completely govern this growth, and parallel flow theories do not capture the essential features of the development of the vortices. A detailed comparison between the parallel and nonparallel theories is given and it is shown that at small vortex wavelengths, the parallel flow theories have some validity; otherwise nonparallel effects are dominant. New results for the receptivity problem for Goertler vortices are given; in particular, vortices induced by free stream perturbations impinging on the leading edge of the walls are considered. It is found that the most dangerous mode of this type can be isolated and its neutral curve is determined. This curve agrees very closely with the available experimental data. A discussion of the different regimes of growth of nonlinear vortices is also given. Again it is shown that, unless the vortex wavelength is small, nonparallel effects are dominant. Some new results for nonlinear vortices of O(1) wavelengths are given and compared to experimental observations.
Information complementarity in multipartite quantum states and security in cryptography
NASA Astrophysics Data System (ADS)
Bera, Anindita; Kumar, Asutosh; Rakshit, Debraj; Prabhu, R.; SenDe, Aditi; Sen, Ujjwal
2016-03-01
We derive complementarity relations for arbitrary quantum states of multiparty systems of any number of parties and dimensions between the purity of a part of the system and several correlation quantities, including entanglement and other quantum correlations as well as classical and total correlations, of that part with the remainder of the system. We subsequently use such a complementarity relation between purity and quantum mutual information in the tripartite scenario to provide a bound on the secret key rate for individual attacks on a quantum key distribution protocol.
Challenges to Bohr's Wave-Particle Complementarity Principle
NASA Astrophysics Data System (ADS)
Rabinowitz, Mario
2013-02-01
Contrary to Bohr's complementarity principle, in 1995 Rabinowitz proposed that by using entangled particles from the source it would be possible to determine which slit a particle goes through while still preserving the interference pattern in the Young's two slit experiment. In 2000, Kim et al. used spontaneous parametric down conversion to prepare entangled photons as their source, and almost achieved this. In 2012, Menzel et al. experimentally succeeded in doing this. When the source emits entangled particle pairs, the traversed slit is inferred from measurement of the entangled particle's location by using triangulation. The violation of complementarity breaches the prevailing probabilistic interpretation of quantum mechanics, and benefits Bohm's pilot-wave theory.
Couple Complementarity and Similarity: A Review of the Literature.
ERIC Educational Resources Information Center
White, Stephen G.; Hatcher, Chris
1984-01-01
Examines couple complementarity and similarity, and their relationship to dyadic adjustment, from three perspectives: social/psychological research, clinical populations research, and the observations of family therapists. Methodological criticisms are discussed suggesting that the evidence for a relationship between similarity and…
Generalized uncertainty principle: implications for black hole complementarity
NASA Astrophysics Data System (ADS)
Chen, Pisin; Ong, Yen Chin; Yeom, Dong-han
2014-12-01
At the heart of the black hole information loss paradox and the firewall controversy lies the conflict between quantum mechanics and general relativity. Much has been said about quantum corrections to general relativity, but much less in the opposite direction. It is therefore crucial to examine possible corrections to quantum mechanics due to gravity. Indeed, the Heisenberg Uncertainty Principle is one profound feature of quantum mechanics, which nevertheless may receive corrections when gravitational effects become important. Such a generalized uncertainty principle (GUP) has been motivated not only from quite general considerations of quantum mechanics and gravity, but also from string theoretic arguments. We examine the role of GUP in the context of black hole complementarity. We find that while complementarity can be violated by large N rescaling if one assumes only the Heisenberg Uncertainty Principle, the application of GUP may save complementarity, but only if a certain N-dependence is also assumed. This raises two important questions beyond the scope of this work: whether GUP really has the proposed form of N-dependence, and whether black hole complementarity is indeed correct.
Are 't Hooft indices constrained in preon models with complementarity?
NASA Astrophysics Data System (ADS)
Okamoto, Yuko
1989-03-01
We present a counterexample to the conjecture that the 't Hooft indices for composite models satisfying complementarity are bounded in magnitude by 1. The model is based on the metacolor group SU(9)_MC with two preons in the representation 36 and two preons in the conjugate representation 126-bar. We obtain the 't Hooft index 12 for this model.
NASA Astrophysics Data System (ADS)
Hawthorne, Bryant; Panchal, Jitesh H.
2014-07-01
A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
ERIC Educational Resources Information Center
Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice
2015-01-01
The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…
Neural network for solving Nash equilibrium problem in application of multiuser power control.
He, Xing; Yu, Junzhi; Huang, Tingwen; Li, Chuandong; Li, Chaojie
2014-09-01
In this paper, based on an equivalent mixed linear complementarity problem, we propose a neural network to solve multiuser power control optimization problems (MPCOP), modeled as a noncooperative Nash game in modern digital subscriber line (DSL) systems. If the channel crosstalk coefficient matrix is positive semidefinite, the proposed neural network is shown to be stable in the sense of Lyapunov and globally convergent to a Nash equilibrium, and the Nash equilibrium is unique if the channel crosstalk coefficient matrix is positive definite. Finally, simulation results on two numerical examples show the effectiveness and performance of the proposed neural network.
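The LCP structure underlying such Nash games can be sketched with a much simpler device than the paper's network: a projected fixed-point iteration, which is the natural discrete-time analogue of a projection neural network. The matrix and vector below are hypothetical stand-ins, not DSL crosstalk data, and the step size alpha is chosen small enough for convergence with a positive definite M.

```python
import numpy as np

def solve_lcp(M, q, alpha=0.1, iters=5000):
    """Projected fixed-point iteration for LCP(q, M):
    find z >= 0 with w = M z + q >= 0 and z^T w = 0.
    Converges for positive definite M with small alpha; each step
    projects a gradient-like update onto the nonnegative orthant."""
    z = np.zeros_like(q)
    for _ in range(iters):
        z = np.maximum(0.0, z - alpha * (M @ z + q))
    return z

# Small positive definite example (illustrative only).
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 1.0])
z = solve_lcp(M, q)
w = M @ z + q
print(z, w, z @ w)  # complementarity: z >= 0, w >= 0, z.w ~ 0
```

Uniqueness of the fixed point under positive definiteness mirrors the uniqueness of the Nash equilibrium claimed in the abstract.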
ERIC Educational Resources Information Center
Strickland, Tricia K.; Maccini, Paula
2013-01-01
We examined the effects of the Concrete-Representational-Abstract Integration strategy on the ability of secondary students with learning disabilities to multiply linear algebraic expressions embedded within contextualized area problems. A multiple-probe design across three participants was used. Results indicated that the integration of the…
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. Stresses are obtained from strain tensors fitted in the least-squares sense at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions during assembly.
Solving the Beam Bending Problem with a Unilateral Winkler Foundation
NASA Astrophysics Data System (ADS)
Machalová, Jitka; Netuka, Horymír
2011-09-01
This work deals with the bending of a beam resting on a unilateral elastic foundation and develops further the ideas of article [5]. In some cases the beam has a fixed connection with the foundation; such problems are linear. There are, however, applications where the beam is not connected with the foundation. This so-called unilateral case represents an interesting nonlinear problem and cannot be solved by simple means. We first propose a new formulation of the problem based on the idea of a decomposition, which converts the usual variational formulation into a saddle-point formulation. In the second part of the paper we deal with the numerical solution using the finite element method. The system of equations for the saddle point is nonlinear and nondifferentiable; it can be handled by transformation to a complementarity problem, which is solved by a nonsmooth Newton method.
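The final step described above, solving a complementarity reformulation with a nonsmooth Newton method, can be sketched in miniature. The snippet below applies semismooth Newton to F(z) = min(z, Mz + q) = 0, an equivalent restatement of the LCP; the 2x2 data are purely illustrative, not a discretized beam.

```python
import numpy as np

def nonsmooth_newton_lcp(M, q, iters=50, tol=1e-12):
    """Semismooth Newton on F(z) = min(z, M z + q) = 0, equivalent to
    the LCP  z >= 0, M z + q >= 0, z^T (M z + q) = 0."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        w = M @ z + q
        F = np.minimum(z, w)
        if np.linalg.norm(F) < tol:
            break
        # Element of the generalized Jacobian of F: identity row where
        # z_i is the active (smaller) branch, row of M where w_i is.
        J = np.where((z <= w)[:, None], np.eye(n), M)
        z = z - np.linalg.solve(J, F)
    return z

# Illustrative positive definite example.
M = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, 2.0])
z = nonsmooth_newton_lcp(M, q)
w = M @ z + q
print(z, w)  # complementary pair: z_i * w_i = 0 componentwise
```

On such small well-conditioned problems the method terminates in one or two Newton steps, reflecting the local superlinear convergence that motivates its use in the paper.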
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
Sexual complementarity between host humoral toxicity and soldier caste in a polyembryonic wasp
Uka, Daisuke; Sakamoto, Takuma; Yoshimura, Jin; Iwabuchi, Kikuo
2016-01-01
Defense against enemies is a type of natural selection considered fundamentally equivalent between the sexes. In reality, however, whether males and females differ in defense strategy is unknown. Multiparasitism necessarily leads to the problem of defense for a parasite (parasitoid). The polyembryonic parasitic wasp Copidosoma floridanum is famous for its larval soldiers’ ability to kill other parasites. This wasp also exhibits sexual differences not only with regard to the competitive ability of the soldier caste but also with regard to host immune enhancement. Female soldiers are more aggressive than male soldiers, and their numbers increase upon invasion of the host by other parasites. In this report, in vivo and in vitro competition assays were used to test whether females have a toxic humoral factor; if so, then its strength was compared with that of males. We found that females have a toxic factor that is much weaker than that of males. Our results imply sexual complementarity between host humoral toxicity and larval soldiers. We discuss how this sexual complementarity guarantees adaptive advantages for both males and females despite the one-sided killing of male reproductives by larval female soldiers in a mixed-sex brood. PMID:27385149
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
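The local-linearization idea applied to the quaternion rate equations can be sketched concretely: over one step the angular rate ω is frozen, so q̇ = ½Ω(ω)q has the closed-form solution q(t+h) = exp(½hΩ)q(t), and since Ω² = −|ω|²I the exponential reduces to a sine/cosine formula. The rates and step size below are illustrative assumptions, and the explicit Euler update is included only for contrast.

```python
import numpy as np

def omega_matrix(w):
    """Omega(w) for a scalar-first quaternion q = [q0, q1, q2, q3],
    so that qdot = 0.5 * Omega(w) @ q; Omega is skew-symmetric."""
    p, q, r = w
    return np.array([[0.0, -p, -q, -r],
                     [p, 0.0, r, -q],
                     [q, -r, 0.0, p],
                     [r, q, -p, 0.0]])

def step_local_linearization(q, w, h):
    """One step with w held constant: q <- exp(0.5*h*Omega(w)) q.
    Since Omega^2 = -|w|^2 I, the exponential is
    cos(a) I + (sin(a)/|w|) Omega, with a = 0.5*h*|w|."""
    n = np.linalg.norm(w)
    a = 0.5 * h * n
    return (np.cos(a) * np.eye(4) + (np.sin(a) / n) * omega_matrix(w)) @ q

q = np.array([1.0, 0.0, 0.0, 0.0])   # attitude quaternion
qe = q.copy()
w = np.array([5.0, 3.0, 1.0])        # high body rates (rad/s), hypothetical
h = 0.01
for _ in range(1000):
    q = step_local_linearization(q, w, h)
    qe = qe + 0.5 * h * omega_matrix(w) @ qe  # classical explicit Euler
print(np.linalg.norm(q))   # unit norm preserved (orthogonal update)
print(np.linalg.norm(qe))  # Euler norm drifts upward
```

Because the update matrix is orthogonal, the quaternion norm is preserved to machine precision, illustrating the stability advantage over classical low-order methods at high angular rates.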
Saul Rosenzweig's purview: from experimenter/experimentee complementarity to idiodynamics.
Rosenzweig, Saul
2004-06-01
Following a brief personal biography, an exposition of Saul Rosenzweig's scientific contributions is presented. Starting in 1933 with experimenter/experimentee complementarity, this point of view was extended to implicit common factors in psychotherapy (Rosenzweig, 1936), then to the complementary pattern of the so-called schools of psychology (Rosenzweig, 1937). Similarly, converging approaches in personality theory emerged as another type of complementarity (Rosenzweig, 1944a). The three types of norms (nomothetic, demographic, and idiodynamic) within the range of dynamic human behavior were formulated and led to idiodynamics as a successor to personality theory. This formulation included the concept of the idioverse, defined as a self-creative and experiential population of events, which opened up a methodology (psychoarcheology) for reconstructing the creativity of outstanding scientific and artistic craftsmen like William James and Sigmund Freud among psychologists, and Henry James, Herman Melville, and Nathaniel Hawthorne among writers of fiction.
A low composite scale preon model with complementarity
NASA Astrophysics Data System (ADS)
Geng, C. Q.; Marshak, R. E.
1987-12-01
We have constructed the first “realistic candidate” preon model with a low composite scale satisfying complementarity between the Higgs and confining phases. The model is based on SU(4) metacolor and predicts four generations of ordinary quarks and leptons together with heavy neutrinos at the level of the standard gauge group SU(3)_c × SU(2)_L × U(1)_Y. There are no exotic massless fermions. The global family group is SU(2) × U(1).
Sequence complementarity-driven nonenzymatic ligation of RNA.
Pino, Samanta; Costanzo, Giovanna; Giorgi, Alessandra; Di Mauro, Ernesto
2011-04-12
We report two reactions of RNA G:C sequences occurring nonenzymatically in water in the absence of any added cofactor or metal ion: (a) sequence complementarity-driven terminal ligation and (b) complementary sequence adaptor-driven multiple tandemization. The two abiotic reactions increase the chemical complexity of the resulting pool of RNA molecules and change the Shannon information of the initial population of sequences.
Benefits of integrating complementarity into priority threat management.
Chadés, Iadine; Nicol, Sam; van Leeuwen, Stephen; Walters, Belinda; Firn, Jennifer; Reeson, Andrew; Martin, Tara G; Carwardine, Josie
2015-04-01
Conservation decision tools based on cost-effectiveness analysis are used to assess threat management strategies for improving species persistence. These approaches rank alternative strategies by their benefit to cost ratio but may fail to identify the optimal sets of strategies to implement under limited budgets because they do not account for redundancies. We devised a multiobjective optimization approach in which the complementarity principle is applied to identify the sets of threat management strategies that protect the most species for any budget. We used our approach to prioritize threat management strategies for 53 species of conservation concern in the Pilbara, Australia. We followed a structured elicitation approach to collect information on the benefits and costs of implementing 17 different conservation strategies during a 3-day workshop with 49 stakeholders and experts in the biodiversity, conservation, and management of the Pilbara. We compared the performance of our complementarity priority threat management approach with a current cost-effectiveness ranking approach. A complementary set of 3 strategies: domestic herbivore management, fire management and research, and sanctuaries provided all species with >50% chance of persistence for $4.7 million/year over 20 years. Achieving the same result cost almost twice as much ($9.71 million/year) when strategies were selected by their cost-effectiveness ranks alone. Our results show that complementarity of management benefits has the potential to double the impact of priority threat management approaches.
Finite element analysis of 3D elastic-plastic frictional contact problem for Cosserat materials
NASA Astrophysics Data System (ADS)
Zhang, S.; Xie, Z. Q.; Chen, B. S.; Zhang, H. W.
2013-06-01
The objective of this paper is to develop a finite element model for 3D elastic-plastic frictional contact problem of Cosserat materials. Because 3D elastic-plastic frictional contact problems belong to the unspecified boundary problems with nonlinearities in both material and geometric forms, a large number of calculations are needed to obtain numerical results with high accuracy. Based on the parametric variational principle and the corresponding quadratic programming method for numerical simulation of frictional contact problems, a finite element model is developed for 3D elastic-plastic frictional contact analysis of Cosserat materials. The problems are finally reduced to linear complementarity problems (LCP). Numerical examples show the feasibility and importance of the developed model for analyzing the contact problems of structures with materials which have micro-polar characteristics.
Graphing the Model or Modeling the Graph? Not-so-Subtle Problems in Linear IS-LM Analysis.
ERIC Educational Resources Information Center
Alston, Richard M.; Chi, Wan Fu
1989-01-01
Outlines the differences between the traditional and modern theoretical models of demand for money. States that the two models are often used interchangeably in textbooks, causing ambiguity. Argues against the use of linear specifications that imply that income velocity can increase without limit and that autonomous components of aggregate demand…
Spatio-temporal complementarity of wind and solar power in India
NASA Astrophysics Data System (ADS)
Lolla, Savita; Baidya Roy, Somnath; Chowdhury, Sourangshu
2015-04-01
Wind and solar power are likely to be a part of the solution to the climate change problem. That is why they feature prominently in the energy policies of all industrial economies including India. One of the major hindrances that is preventing an explosive growth of wind and solar energy is the issue of intermittency. This is a major problem because in a rapidly moving economy, energy production must match the patterns of energy demand. Moreover, sudden increase and decrease in energy supply may destabilize the power grids leading to disruptions in power supply. In this work we explore if the patterns of variability in wind and solar energy availability can offset each other so that a constant supply can be guaranteed. As a first step, this work focuses on seasonal-scale variability for each of the 5 regional power transmission grids in India. Communication within each grid is better than communication between grids. Hence, it is assumed that the grids can switch sources relatively easily. Wind and solar resources are estimated using the MERRA Reanalysis data for the 1979-2013 period. Solar resources are calculated with a 20% conversion efficiency. Wind resources are estimated using a 2 MW turbine power curve. Total resources are obtained by optimizing location and number of wind/solar energy farms. Preliminary results show that the southern and western grids are more appropriate for cogeneration than the other grids. Many studies on wind-solar cogeneration have focused on temporal complementarity at local scale. However, this is one of the first studies to explore spatial complementarity over regional scales. This project may help accelerate renewable energy penetration in India by identifying regional grid(s) where the renewable energy intermittency problem can be minimized.
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
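As a toy illustration of the kind of problem such codes solve, the sketch below finds the optimum of a small textbook linear program by brute-force vertex enumeration; the example is not taken from the report, and practical solvers use the simplex machinery (vector spaces, convex sets, matrix algebra) the survey covers.

```python
from itertools import combinations

# Tiny LP solved by vertex enumeration (illustrative only):
#   maximize   3x + 5y
#   subject to x <= 4,  2y <= 12,  3x + 2y <= 18,  x >= 0,  y >= 0
A = [(1.0, 0.0), (0.0, 2.0), (3.0, 2.0), (-1.0, 0.0), (0.0, -1.0)]
b = [4.0, 12.0, 18.0, 0.0, 0.0]
c = (3.0, 5.0)

def intersect(row1, row2):
    """Intersection point of two constraint boundary lines (Cramer's rule)."""
    (a1, b1), (a2, b2) = row1, row2
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None   # parallel boundaries
    x = (b1 * a2[1] - b2 * a1[1]) / det
    y = (a1[0] * b2 - a2[0] * b1) / det
    return (x, y)

def feasible(p):
    return all(ai[0] * p[0] + ai[1] * p[1] <= bi + 1e-9
               for ai, bi in zip(A, b))

vertices = [p for row1, row2 in combinations(list(zip(A, b)), 2)
            if (p := intersect(row1, row2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # optimum at x=2, y=6 with objective value 36
```

The enumeration relies on the fact, covered in the survey's convex-sets material, that an LP optimum is attained at a vertex of the feasible polytope.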
NASA Astrophysics Data System (ADS)
Turkin, Alexander; van Oijen, Antoine M.; Turkin, Anatoliy A.
2015-11-01
One-dimensional sliding along DNA as a means to accelerate protein target search is a well-known phenomenon occurring in various biological systems. Using a biomimetic approach, we have recently demonstrated the practical use of DNA-sliding peptides to speed up bimolecular reactions by more than an order of magnitude by allowing the reactants to associate not only in the solution by three-dimensional (3D) diffusion, but also on DNA via one-dimensional (1D) diffusion [A. Turkin et al., Chem. Sci. (2015), 10.1039/C5SC03063C]. Here we present a mean-field kinetic model of a bimolecular reaction in a solution with linear extended sinks (e.g., DNA) that can intermittently trap molecules present in a solution. The model consists of chemical rate equations for mean concentrations of reacting species. Our model demonstrates that addition of linear traps to the solution can significantly accelerate reactant association. We show that at optimum concentrations of linear traps the 1D reaction pathway dominates in the kinetics of the bimolecular reaction; i.e., these 1D traps function as an assembly line of the reaction product. Moreover, we show that the association reaction on linear sinks between trapped reactants exhibits a nonclassical third-order behavior. Predictions of the model agree well with our experimental observations. Our model provides a general description of bimolecular reactions that are controlled by a combined 3D+1D mechanism and can be used to quantitatively describe both naturally occurring as well as biomimetic biochemical systems that reduce the dimensionality of search.
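The flavor of such mean-field rate equations can be sketched with a toy 3D+1D network; the reaction network and rate constants below are illustrative assumptions, not the authors' fitted model. Free reactants A and B either associate directly in solution or first bind the linear sinks and then associate there.

```python
# Toy two-pathway (3D + 1D) association model, loosely patterned on the
# abstract: Euler integration of mean-concentration rate equations.
def product_yield(k3d, kon, koff, k1d, traps, steps=100000, dt=2e-5):
    a, b = 1.0, 1.0        # free reactant concentrations
    at, bt = 0.0, 0.0      # reactants trapped on the linear sinks
    c = 0.0                # reaction product
    for _ in range(steps):
        r3 = k3d * a * b   # direct 3D association in solution
        r1 = k1d * at * bt # 1D association of trapped reactants
        da = -r3 - kon * a * traps + koff * at
        db = -r3 - kon * b * traps + koff * bt
        dat = kon * a * traps - koff * at - r1
        dbt = kon * b * traps - koff * bt - r1
        a, b = a + da * dt, b + db * dt
        at, bt = at + dat * dt, bt + dbt * dt
        c += (r3 + r1) * dt
    return c

no_traps = product_yield(k3d=1.0, kon=5.0, koff=1.0, k1d=50.0, traps=0.0)
with_traps = product_yield(k3d=1.0, kon=5.0, koff=1.0, k1d=50.0, traps=1.0)
print(no_traps, with_traps)  # the 1D channel accelerates product formation
```

With these placeholder rates the trapped-reactant channel clearly dominates, mirroring the "assembly line" behavior described above.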
NASA Astrophysics Data System (ADS)
Tanaka, Hidefumi; Yamamoto, Yuhji
2016-05-01
Palaeointensity experiments were carried out on a sample collection from two sections of basalt lava flow sequences of Pliocene age in north central Iceland (Chron C2An) to further refine the knowledge of the behaviour of the palaeomagnetic field. Selection of samples was mainly based on their stability of remanence to thermal demagnetization as well as good reversibility in variations of magnetic susceptibility and saturation magnetization with temperature, which would indicate the presence of magnetite as a product of deuteric oxidation of titanomagnetite. Among 167 lava flows from two sections, 44 flows were selected for the Königsberger-Thellier-Thellier experiment in vacuum. In spite of careful pre-selection of samples, an Arai plot with two linear segments, or a concave-up appearance, was often encountered during the experiments. This non-ideal behaviour was probably caused by an irreversible change in the domain state of the magnetic grains of the pseudo-single-domain (PSD) range. This is assumed because an ideal linear plot was obtained in the second run of the palaeointensity experiment in which a laboratory thermoremanence acquired after the final step of the first run was used as a natural remanence. This experiment was conducted on six selected samples, and no clear difference between the magnetic grains of the experimented samples and their pristine sister samples was found by scanning electron microscopy and hysteresis measurements, that is, no occurrence of notable chemical/mineralogical alteration, suggesting that no change in the grain size distribution had occurred. Hence, the two-segment Arai plot was not caused by the reversible multidomain/PSD effect in which the curvature of the Arai plot is dependent on the grain size. Considering that the irreversible change in domain state must have affected data points at not only high temperatures but also low temperatures, fv ≥ 0.5 was adopted as one of the acceptance criteria where fv is a vectorially defined
Reinforcement learning in complementarity game and population dynamics.
Jost, Jürgen; Li, Wei
2014-02-01
We systematically test and compare different reinforcement learning schemes in a complementarity game [J. Jost and W. Li, Physica A 345, 245 (2005)] played between members of two populations. More precisely, we study the Roth-Erev, Bush-Mosteller, and SoftMax reinforcement learning schemes. A modified version of Roth-Erev with a power exponent of 1.5, as opposed to 1 in the standard version, performs best. We also compare these reinforcement learning strategies with evolutionary schemes. This gives insight into aspects like the issue of quick adaptation as opposed to systematic exploration or the role of learning rates.
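A minimal sketch of the Roth-Erev update is below. The power exponent generalizes the standard choice rule (exponent 1.0) to the modified rule (1.5) that the study found to perform best; the single-agent payoff scheme is a stand-in for illustration, not the two-population complementarity game of the paper.

```python
import random

# Minimal Roth-Erev style reinforcement learner (illustrative sketch).
class RothErev:
    def __init__(self, n_actions, exponent=1.5):
        self.q = [1.0] * n_actions          # initial action propensities
        self.exponent = exponent

    def choose(self):
        # Choice probabilities proportional to propensity ** exponent.
        weights = [qi ** self.exponent for qi in self.q]
        return random.choices(range(len(self.q)), weights=weights)[0]

    def update(self, action, payoff):
        self.q[action] += payoff            # reinforce the chosen action

random.seed(0)
agent = RothErev(n_actions=3, exponent=1.5)
for _ in range(2000):
    a = agent.choose()
    agent.update(a, 1.0 if a == 2 else 0.0)  # toy game: action 2 pays off
print(agent.q)  # the propensity of action 2 comes to dominate
```

An exponent above 1 sharpens the choice rule toward exploitation, which is one way to read the quick-adaptation versus systematic-exploration trade-off mentioned above.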
Complementarity endures: no firewall for an infalling observer
NASA Astrophysics Data System (ADS)
Nomura, Yasunori; Varela, Jaime; Weinberg, Sean J.
2013-03-01
We argue that the complementarity picture, interpreted as a reference-frame change represented in the quantum gravitational Hilbert space, does not suffer from the "firewall paradox" recently discussed by Almheiri, Marolf, Polchinski, and Sully. A quantum state described by a distant observer evolves unitarily, with the evolution law well approximated by semi-classical field equations in the region away from the (stretched) horizon. And yet, a classical infalling observer does not see a violation of the equivalence principle, and thus a firewall, at the horizon. The resolution of the paradox lies in careful considerations on how a (semi-)classical world arises in unitary quantum mechanics describing the whole universe/multiverse.
The methodological lesson of complementarity: Bohr’s naturalistic epistemology
NASA Astrophysics Data System (ADS)
Folse, H. J.
2014-12-01
Bohr’s intellectual journey began with the recognition that empirical phenomena implied the breakdown of classical mechanics in the atomic domain; this, in turn, led to his adoption of the ‘quantum postulate’ that justifies the ‘stationary states’ of his atomic model of 1913. His endeavor to develop a wider conceptual framework harmonizing both classical and quantum descriptions led to his proposal of the new methodological goals and standards of complementarity. Bohr’s claim that an empirical discovery can demand methodological revision justifies regarding his epistemological lesson as supporting a naturalistic epistemology.
NASA Technical Reports Server (NTRS)
Wong, P. K.
1975-01-01
The closely-related problems of designing reliable feedback stabilization strategy and coordinating decentralized feedbacks are considered. Two approaches are taken. A geometric characterization of the structure of control interaction (and its dual) was first attempted and a concept of structural homomorphism developed based on the idea of 'similarity' of interaction pattern. The idea of finding classes of individual feedback maps that do not 'interfere' with the stabilizing action of each other was developed by identifying the structural properties of nondestabilizing and LQ-optimal feedback maps. Some known stability properties of LQ-feedback were generalized and some partial solutions were provided to the reliable stabilization and decentralized feedback coordination problems. A concept of coordination parametrization was introduced, and a scheme for classifying different modes of decentralization (information, control law computation, on-line control implementation) in control systems was developed.
NASA Technical Reports Server (NTRS)
Bensoussan, A.; Delfour, M. C.; Mitter, S. K.
1976-01-01
Available published results are surveyed for a special class of infinite-dimensional control systems whose evolution is characterized by a semigroup of operators of class C subscript zero. Emphasis is placed on an approach that clarifies the system-theoretic relationship among controllability, stabilizability, stability, and the existence of a solution to an associated operator equation of the Riccati type. Formulation of the optimal control problem is reviewed along with the asymptotic behavior of solutions to a general system of equations and several theorems concerning L2 stability. Examples are briefly discussed which involve second-order parabolic systems, first-order hyperbolic systems, and distributed boundary control.
Cameron, M.K.; Fomel, S.B.; Sethian, J.A.
2009-01-01
In the present work we derive and study a nonlinear elliptic PDE coming from the problem of estimation of sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, which is hence ill-posed. However, we are still able to solve it numerically on a long enough time interval to be of practical use. We used two approaches. The first approach is a finite-difference time-marching numerical scheme inspired by the Lax-Friedrichs method. The key features of this scheme are the Lax-Friedrichs averaging and the wide stencil in space. The second approach is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics (in the spectral method, the truncation of the Chebyshev series plays this role), and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
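The two Lax-Friedrichs ingredients named above, averaging of neighbors and a wide spatial stencil, can be seen in a sketch for the model advection equation u_t + a u_x = 0 on a periodic grid; the paper's elliptic PDE is far more involved, so this shows only the scheme's skeleton.

```python
import math

# One Lax-Friedrichs time step: replace u_j by the average of its two
# neighbours (the dissipative averaging) minus a centered flux term
# over the wide, two-cell stencil.
def lax_friedrichs_step(u, a, dx, dt):
    n = len(u)
    return [0.5 * (u[(j + 1) % n] + u[(j - 1) % n])
            - a * dt / (2.0 * dx) * (u[(j + 1) % n] - u[(j - 1) % n])
            for j in range(n)]

n, a = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / abs(a)        # Courant number 0.5, respects the CFL limit
u = [math.sin(2.0 * math.pi * j * dx) for j in range(n)]
for _ in range(100):
    u = lax_friedrichs_step(u, a, dx, dt)
peak = max(abs(x) for x in u)
print(peak)  # numerical dissipation damps the wave amplitude below 1
```

The built-in damping of high harmonics visible here is exactly the property point (3) of the abstract relies on.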
Indivisibility, Complementarity and Ontology: A Bohrian Interpretation of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Roldán-Charria, Jairo
2014-12-01
The interpretation of quantum mechanics presented in this paper is inspired by two ideas that are fundamental in Bohr's writings: indivisibility and complementarity. Further basic assumptions of the proposed interpretation are completeness, universality and conceptual economy. In the interpretation, decoherence plays a fundamental role for the understanding of measurement. A general and precise conception of complementarity is proposed. It is fundamental in this interpretation to make a distinction between ontological reality, constituted by everything that does not depend at all on the collectivity of human beings, nor on their decisions or limitations, nor on their existence, and empirical reality constituted by everything that not being ontological is, however, intersubjective. According to the proposed interpretation, neither the dynamical properties, nor the constitutive properties of microsystems like mass, charge and spin, are ontological. The properties of macroscopic systems and space-time are also considered to belong to empirical reality. The acceptance of the above mentioned conclusion does not imply a total rejection of the notion of ontological reality. In the paper, utilizing the Aristotelian ideas of general cause and potentiality, a relation between ontological reality and empirical reality is proposed. Some glimpses of ontological reality, in the form of what can be said about it, are finally presented.
ERIC Educational Resources Information Center
Rothe, J. Peter
This article focuses on the linkage between the quantitative and qualitative distance education research methods. The concept that serves as the conceptual link is termed "complementarity." The definition of complementarity emerges through a simulated study of FernUniversitat's mentors. The study shows that in the case of the mentors, educational…
ERIC Educational Resources Information Center
Laird, Heather; Vande Kemp, Hendrika
1987-01-01
Explored the level of family therapist complementarity in the early, middle and late stages of therapy by performing a micro-analysis of Salvador Minuchin with one family in successful therapy. Level of therapist complementarity was significantly greater in the early and late stages than in the middle stage, and was significantly correlated with…
Interpersonal Complementarity in the Mental Health Intake: A Mixed-Methods Study
ERIC Educational Resources Information Center
Rosen, Daniel C.; Miller, Alisa B.; Nakash, Ora; Halperin, Lucila; Alegria, Margarita
2012-01-01
The study examined which socio-demographic differences between clients and providers influenced interpersonal complementarity during an initial intake session; that is, behaviors that facilitate harmonious interactions between client and provider. Complementarity was assessed using blinded ratings of 114 videotaped intake sessions by trained…
Solving MPCC Problem with the Hyperbolic Penalty Function
NASA Astrophysics Data System (ADS)
Melo, Teófilo; Monteiro, M. Teresa T.; Matias, João
2011-09-01
The main goal of this work is to solve mathematical programs with complementarity constraints (MPCC) using nonlinear programming (NLP) techniques. A hyperbolic penalty function is used to solve MPCC problems by including the complementarity constraints in the penalty term. This penalty function [1] is twice continuously differentiable and combines features of both exterior and interior penalty methods. A set of AMPL problems from MacMPEC [2] is tested and a comparative study is performed.
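A sketch of the idea is below, assuming the Xavier-type hyperbolic penalty P(y) = -λy + sqrt(λ²y² + τ²) for constraints g(x) ≥ 0 and a made-up two-variable MPCC; the MacMPEC/AMPL problems themselves are not reproduced here.

```python
import math

# Hyperbolic penalty: twice continuously differentiable, roughly zero on
# the feasible side (y >> 0) and roughly 2*lam*|y| when violated (y < 0),
# combining exterior- and interior-penalty behaviour.
def P(y, lam, tau):
    return -lam * y + math.sqrt((lam * y) ** 2 + tau ** 2)

# Toy MPCC:  minimize (x1-1)^2 + (x2-0.2)^2
#            s.t. x1 >= 0, x2 >= 0, x1*x2 = 0  (complementarity)
# The complementarity constraint enters the penalty as g(x) = -x1*x2 >= 0.
def penalized(x1, x2, lam, tau):
    return (x1 - 1.0) ** 2 + (x2 - 0.2) ** 2 + P(-x1 * x2, lam, tau)

# Minimize over a grid on [0,1]^2 (bounds enforced by the grid itself)
# and watch the minimizer migrate to a complementary point as lam grows.
best = {}
for lam, tau in [(0.1, 1.0), (1.0, 0.1), (100.0, 0.01)]:
    pts = [(i / 100, j / 100) for i in range(101) for j in range(101)]
    best[lam] = min(pts, key=lambda p: penalized(p[0], p[1], lam, tau))
print(best)  # at lam=100 the minimizer satisfies x1*x2 = 0
```

In an actual NLP run the grid search would be replaced by a smooth solver, which is precisely what the penalty's twice-differentiability enables.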
Interference and complementarity for two-photon hybrid entangled states
Nogueira, W. A. T.; Santibanez, M.; Delgado, A.; Saavedra, C.; Neves, L.; Lima, G.; Padua, S.
2010-10-15
In this work we generate two-photon hybrid entangled states (HESs), where the polarization of one photon is entangled with the transverse spatial degree of freedom of the second photon. The photon pair is created by parametric down-conversion in a polarization-entangled state. A birefringent double-slit couples the polarization and spatial degrees of freedom of these photons, and finally, suitable spatial and polarization projections generate the HES. We investigate some interesting aspects of the two-photon hybrid interference and present this study in the context of the complementarity relation that exists between the visibility of the one-photon and that of the two-photon interference patterns.
Complementarity of quantum discord and classically accessible information
Zwolak, Michael P.; Zurek, Wojciech H.
2013-05-20
The sum of the Holevo quantity (that bounds the capacity of quantum channels to transmit classical information about an observable) and the quantum discord (a measure of the quantumness of correlations of that observable) yields an observable-independent total given by the quantum mutual information. This split naturally delineates information about quantum systems accessible to observers – information that is redundantly transmitted by the environment – while showing that it is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. In addition, we prove an anti-symmetry property relating accessible information and discord. It shows that information becomes objective – accessible to many observers – only as quantum information is relegated to correlations with the global environment, and, therefore, locally inaccessible. Lastly, the resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality while flagrantly quantum superpositions are out of reach.
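The split described in the abstract can be written compactly (notation assumed here: S the system, F the fragment or channel available to the observer, and Π̂_S the measured observable):

```latex
% Observable-dependent pieces summing to an observable-independent total:
% Holevo quantity (classically accessible) plus quantum discord equals
% the quantum mutual information, for any observable \hat{\Pi}_S.
\chi\bigl(\hat{\Pi}_S : F\bigr) \;+\; D\bigl(\hat{\Pi}_S : F\bigr) \;=\; I(S : F)
```

The left-hand terms each depend on the choice of Π̂_S, while their sum does not, which is the delineation the abstract emphasizes.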
Complementarity of Neutrinoless Double Beta Decay and Cosmology
Dodelson, Scott; Lykken, Joseph
2014-03-20
Neutrinoless double beta decay experiments constrain one combination of neutrino parameters, while cosmic surveys constrain another. This complementarity opens up an exciting range of possibilities. If neutrinos are Majorana particles, and the neutrino masses follow an inverted hierarchy, then the upcoming sets of both experiments will detect signals. The combined constraints will pin down not only the neutrino masses but also constrain one of the Majorana phases. If the hierarchy is normal, then a beta decay detection with the upcoming generation of experiments is unlikely, but cosmic surveys could constrain the sum of the masses to be relatively heavy, thereby producing a lower bound for the neutrinoless double beta decay rate, and therefore an argument for a next generation beta decay experiment. In this case as well, a combination of the phases will be constrained.
Complementarity of information and the emergence of the classical world
NASA Astrophysics Data System (ADS)
Zwolak, Michael; Zurek, Wojciech
2013-03-01
We prove an anti-symmetry property relating accessible information about a system through some auxiliary system F and the quantum discord with respect to a complementary system F'. In Quantum Darwinism, where fragments of the environment relay information to observers, this relation allows us to understand some fundamental properties regarding correlations between a quantum system and its environment. First, it relies on a natural separation of accessible information and quantum information about a system. Under decoherence, this separation shows that accessible information is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. Second, it shows that objective information becomes accessible to many observers only when quantum information is relegated to correlations with the global environment, and, therefore, locally inaccessible. The resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality, and supports Bohr's intuition that quantum phenomena acquire classical reality only when communicated.
Phenomenology and the life sciences: Clarifications and complementarities.
Sheets-Johnstone, Maxine
2015-12-01
This paper first clarifies phenomenology in ways essential to demonstrating its basic concern with Nature and its recognition of individual and cultural differences as well as commonalities. It furthermore clarifies phenomenological methodology in ways essential to understanding the methodology itself, its purpose, and its consequences. These clarifications show how phenomenology, by hewing to the dynamic realities of life itself and experiences of life itself, counters reductive thinking and "embodiments" of one kind and another. On the basis of these clarifications, the paper then turns to detailing conceptual complementarities between phenomenology and the life sciences, particularly highlighting studies in coordination dynamics. In doing so, it brings to light fundamental relationships such as those between mind and motion and between intrinsic dynamics and primal animation. It furthermore highlights the common concern with origins in both phenomenology and evolutionary biology: the history of how what is present is related to its inception in the past and to its transformations from past to present.
Nurse practitioner and physician roles: delineation and complementarity of practice.
Davidson, R A; Lauver, D
1984-03-01
Because of differences in education and role preparation, nurse practitioners (NPs) and physicians (MDs) may assume either complementary or substitutive roles in patient care. To describe role complementarity and similarity, the role perceptions of 15 NPs and 15 MDs in joint practice were assessed. Ten NP and MD respondent pairs were selected from a variety of ambulatory primary care settings (urban, rural, public, private, and health maintenance organizations), and five NP-MD pairs were chosen at random. A questionnaire of nine patient vignettes was created; respondents rated the appropriateness of their role in managing the clients described in each vignette using an 8-point scale. Significant differences existed between NP and MD perceived roles for six vignettes, p less than .05. NPs identified as highly role appropriate those vignettes necessitating psychosocial support and health education; MDs identified as highly role appropriate vignettes representing high risk physical conditions. The differences in NP and MD role perception were complementary. PMID:6565298
Positive effects of neighborhood complementarity on tree growth in a Neotropical forest.
Chen, Yuxin; Wright, S Joseph; Muller-Landau, Helene C; Hubbell, Stephen P; Wang, Yongfan; Yu, Shixiao
2016-03-01
Numerous grassland experiments have found evidence for a complementarity effect, an increase in productivity with higher plant species richness due to niche partitioning. However, empirical tests of complementarity in natural forests are rare. We conducted a spatially explicit analysis of 518 433 growth records for 274 species from a 50-ha tropical forest plot to test neighborhood complementarity, the idea that a tree grows faster when it is surrounded by more dissimilar neighbors. We found evidence for complementarity: focal tree growth rates increased by 39.8% and 34.2% with a doubling of neighborhood multi-trait dissimilarity and phylogenetic dissimilarity, respectively. Dissimilarity from neighbors in maximum height had the most important effect on tree growth among the six traits examined, and indeed, its effect trended much larger than that of the multitrait dissimilarity index. Neighborhood complementarity effects were strongest for light-demanding species, and decreased in importance with increasing shade tolerance of the focal individuals. Simulations demonstrated that the observed neighborhood complementarities were sufficient to produce positive stand-level biodiversity-productivity relationships. We conclude that neighborhood complementarity is important for productivity in this tropical forest, and that scaling down to individual-level processes can advance our understanding of the mechanisms underlying stand-level biodiversity-productivity relationships.
NASA Astrophysics Data System (ADS)
Chyba, David Edward
This dissertation presents new results for the steady states of a detuned ring laser with a saturable absorber. The treatment is based on a semiclassical model which assumes homogeneously broadened two-level atoms. Part 1 presents a solution of the Maxwell-Bloch equations for the longitudinal dependence of the steady states of this system. The solution is then simplified by use of the mean field approximation. Graphical results in the mean field approximation are presented for squared electric field versus operating frequency, and for each of these versus cavity tuning and laser excitation. Various cavity linewidths and both resonant and non-resonant amplifier and absorber line center frequencies are considered. The most notable finding is that cavity detuning breaks the degeneracies previously found in the steady state solutions to the fully tuned case. This led to the prediction that an actual system will bifurcate from the zero intensity solution to a steady state solution as laser excitation increases from zero, rather than to the small amplitude pulsations found for the model with mathematically exact tuning of the cavity and the media line centers. Other phenomena suggested by the steady state results include tuning-dependent hysteresis and bistability, and instability due to the appearance of another steady state solution. Results for the case in which the media have different line center frequencies suggest non-monotonic behavior of the electric field amplitude as laser excitation varies, as well as hysteresis and bistability. Part 2 presents a formulation of the linearized stability problem for the steady state solutions discussed in the first part. Thus the effects of detuning and of the other parameters describing the system are incorporated into the stability analysis. The equations of the system are linearized about both the mean field steady states and about the longitudinally dependent steady states. Expansion in Fourier spatial modes is used in the
Rapid Online Analysis of Local Feature Detectors and Their Complementarity
Ehsan, Shoaib; Clark, Adrian F.; McDonald-Maier, Klaus D.
2013-01-01
A vision system that can assess its own performance and take appropriate actions online to maximize its effectiveness would be a step towards achieving the long-cherished goal of imitating humans. This paper proposes a method for performing an online performance analysis of local feature detectors, the primary stage of many practical vision systems. It advocates the spatial distribution of local image features as a good performance indicator and presents a metric that can be calculated rapidly, concurs with human visual assessments and is complementary to existing offline measures such as repeatability. The metric is shown to provide a measure of complementarity for combinations of detectors, correctly reflecting the underlying principles of individual detectors. Qualitative results on well-established datasets for several state-of-the-art detectors are presented based on the proposed measure. Using a hypothesis testing approach and a newly-acquired, larger image database, statistically-significant performance differences are identified. Different detector pairs and triplets are examined quantitatively and the results provide a useful guideline for combining detectors in applications that require a reasonable spatial distribution of image features. A principled framework for combining feature detectors in these applications is also presented. Timing results reveal the potential of the metric for online applications. PMID:23966187
Complementarity reveals bound entanglement of two twisted photons
NASA Astrophysics Data System (ADS)
Hiesmayr, Beatrix C.; Löffler, Wolfgang
2013-08-01
We demonstrate the detection of bipartite bound entanglement as predicted by the Horodeckis in 1998. Bound entangled states, being heavily mixed entangled quantum states, can be produced by incoherent addition of pure entangled states. Until 1998 it was thought that such mixing could always be reversed by entanglement distillation; however, this turned out to be impossible for bound entangled states. The purest form of bound entanglement is that of only two particles, which requires higher-dimensional (d > 2) quantum systems. We realize this using photon qutrit (d = 3) pairs produced by spontaneous parametric downconversion, that are entangled in the orbital angular momentum degrees of freedom, which is scalable to high dimensions. Entanglement of the photons is confirmed via a 'maximum complementarity protocol'. This conceptually simple protocol requires only maximal complementarity of the measurement bases; we show that it can also detect bound entanglement. We explore the bipartite qutrit space and find that, also experimentally, a significant portion of the entangled states are actually bound entangled.
Complementarity of dark matter searches in the phenomenological MSSM
Cahill-Rowley, Matthew; Cotta, Randy; Drlica-Wagner, Alex; Funk, Stefan; Hewett, JoAnne; Ismail, Ahmed; Rizzo, Tom; Wood, Matthew
2015-03-11
As is well known, the search for and eventual identification of dark matter in supersymmetry requires a simultaneous, multipronged approach with important roles played by the LHC as well as both direct and indirect dark matter detection experiments. We examine the capabilities of these approaches in the 19-parameter phenomenological MSSM which provides a general framework for complementarity studies of neutralino dark matter. We summarize the sensitivity of dark matter searches at the 7 and 8 (and eventually 14) TeV LHC, combined with those by Fermi, CTA, IceCube/DeepCore, COUPP, LZ and XENON. The strengths and weaknesses of each of these techniques are examined and contrasted and their interdependent roles in covering the model parameter space are discussed in detail. We find that these approaches explore orthogonal territory and that advances in each are necessary to cover the supersymmetric weakly interacting massive particle parameter space. We also find that different experiments have widely varying sensitivities to the various dark matter annihilation mechanisms, some of which would be completely excluded by null results from these experiments.
Complementarity and Area-Efficiency in the Prioritization of the Global Protected Area Network
Kullberg, Peter; Toivonen, Tuuli; Montesino Pouzols, Federico; Lehtomäki, Joona; Di Minin, Enrico; Moilanen, Atte
2015-01-01
Complementarity and cost-efficiency are widely used principles for protected area network design. Despite their wide use and robust theoretical underpinnings, their effects on the performance and patterns of priority areas are rarely studied in detail. Here we compare two approaches for identifying management priority areas inside the global protected area network: 1) a scoring-based approach, used in a recently published analysis, and 2) a spatial prioritization method that accounts for complementarity and area-efficiency. Using the same IUCN species distribution data, the complementarity method found an equal-area set of priority areas covering double the mean species range compared to the scoring-based approach. The complementarity set also had 72% more species with their full ranges covered, and left only half as many species entirely without coverage, compared to the scoring approach. Protected areas in our complementarity-based solution were on average smaller and geographically more scattered. The large difference between the two solutions highlights the need for critical thinking about the selected prioritization method. According to our analysis, accounting for complementarity and area-efficiency can lead to considerable improvements when setting management priorities for the global protected area network. PMID:26678497
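The difference between scoring and complementarity-based selection can be shown with a toy greedy sketch (assumed logic for illustration, not the actual spatial prioritization software used in the paper): scoring ranks sites by richness alone, while the complementarity rule adds, at each step, the site covering the most species not yet represented.

```python
def score_based(sites, k):
    """Pick the k sites with the most species, ignoring overlap."""
    return sorted(sites, key=lambda s: len(sites[s]), reverse=True)[:k]

def complementarity_based(sites, k):
    """Greedy max-coverage: each pick maximizes newly covered species."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sites,
                   key=lambda s: -1 if s in chosen else len(sites[s] - covered))
        chosen.append(best)
        covered |= sites[best]
    return chosen

sites = {
    "A": {"sp1", "sp2", "sp3", "sp4"},   # species-rich ...
    "B": {"sp1", "sp2", "sp3"},          # ... but redundant with A
    "C": {"sp5", "sp6"},                 # complements A
}
```

With a budget of two sites, scoring picks A and B (4 species covered), while the complementarity rule picks A and C (all 6 species), mirroring the paper's finding that complementarity-aware selection covers far more ranges for the same area.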
NASA Astrophysics Data System (ADS)
Giovannacci, D.; Detalle, V.; Martos-Levif, D.; Ogien, J.; Bernikola, E.; Tornari, V.; Hatzigiannakis, K.; Mouhoubi, K.; Bodnar, J.-L.; Walker, G.-C.; Brissaud, D.; Trichereau, B.; Jackson, B.; Bowen, J.
2015-06-01
The abbey church of Chaalis, north of Paris, was founded by Louis VI as a Cistercian monastery on 10th January 1137. In 2013, in the frame of the European Commission's 7th Framework Programme project CHARISMA [grant agreement no. 228330], the chapel was used as a practical case-study for application of the work done in a task devoted to best practices in historical buildings and monuments. In the chapel, three areas were identified as relevant. The first area was used for an exercise on diagnosis of the different deterioration patterns. The second area was used to analyze a restored area. The third one was selected to test some hypotheses on the possibility of using portable instruments to answer questions related to the deterioration problems. To inspect this area, different tools were used: visible fluorescence under UV; a THz system; Stimulated Infra-Red Thermography (SIRT); Digital Holographic Speckle Pattern Interferometry (DHSPI); and a condition report by a conservator-restorer. The complementarity and synergy offered by the profitable use of the different integrated tools is clearly shown in this practical exercise.
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet, it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center-of-mass energy above, say, 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies, a number of problems have to be solved. These problems are of two kinds: those related to the feasibility of the principle, and those associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype, I describe in the last chapter the SLC project at the Stanford Linear Accelerator Center.
Accounting for complementarity to maximize monitoring power for species management.
Tulloch, Ayesha I T; Chadès, Iadine; Possingham, Hugh P
2013-10-01
To choose among conservation actions that may benefit many species, managers need to monitor the consequences of those actions. Decisions about which species to monitor from a suite of different species being managed are hindered by natural variability in populations and uncertainty in several factors: the ability of the monitoring to detect a change, the likelihood of the management action being successful for a species, and how representative species are of one another. However, the literature provides little guidance about how to account for these uncertainties when deciding which species to monitor to determine whether the management actions are delivering outcomes. We devised an approach that applies decision science and selects the best complementary suite of species to monitor to meet specific conservation objectives. We created an index for indicator selection that accounts for the likelihood of successfully detecting a real trend due to a management action and whether that signal provides information about other species. We illustrated the benefit of our approach by analyzing a monitoring program for invasive predator management aimed at recovering 14 native Australian mammals of conservation concern. Our method selected the species that provided more monitoring power at lower cost relative to the current strategy and traditional approaches that consider only a subset of the important considerations. Our benefit function accounted for natural variability in species growth rates, uncertainty in the responses of species to the prescribed action, and how well species represent others. Monitoring programs that ignore uncertainty, likelihood of detecting change, and complementarity between species will be more costly and less efficient and may waste funding that could otherwise be used for management. PMID:24073812
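The shape of such an indicator-selection index can be sketched in a few lines. The form below is an assumption for illustration only (the authors' actual benefit function is richer): the value of monitoring a species is taken as its probability of revealing a real trend, times the number of managed species it represents, per unit monitoring cost.

```python
# Illustrative index (assumed form, not the authors' exact benefit function):
# value = P(detect real trend) * species represented / monitoring cost.
def monitoring_value(p_detect, represents, cost):
    return p_detect * represents / cost

# Hypothetical candidate indicator species with made-up parameters.
candidates = {
    "bilby":   monitoring_value(0.9, 4, 2.0),  # detectable but costly, narrow
    "quoll":   monitoring_value(0.6, 6, 1.5),  # good balance
    "bettong": monitoring_value(0.3, 7, 1.0),  # cheap but noisy signal
}
best = max(candidates, key=candidates.get)
```

Even this crude index already shows the abstract's point: the cheapest or most representative species is not automatically the best indicator once detection uncertainty is priced in.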
Variationally consistent discretization schemes and numerical algorithms for contact problems
NASA Astrophysics Data System (ADS)
Wohlmuth, Barbara
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of
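The reformulation-and-semismooth-Newton idea described above can be sketched on a small linear complementarity problem using the Fischer-Burmeister NCP function phi(a, b) = a + b - sqrt(a^2 + b^2), whose zeros are exactly the pairs with a >= 0, b >= 0, ab = 0. This is an illustrative sketch on a toy problem, not Wohlmuth's primal-dual active set implementation for contact:

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister NCP function: zero iff a>=0, b>=0, a*b=0."""
    return a + b - np.sqrt(a * a + b * b)

def semismooth_newton_lcp(M, q, tol=1e-10, maxit=50):
    """Solve w = M p + q, w >= 0, p >= 0, w.p = 0 by Newton's method
    applied componentwise to Phi(p) = fb(M p + q, p)."""
    p = np.zeros(len(q))
    for _ in range(maxit):
        w = M @ p + q
        Phi = fb(w, p)
        if np.linalg.norm(Phi) < tol:
            break
        r = np.sqrt(w * w + p * p)
        r[r == 0] = 1e-12                # guard the non-differentiable corner
        Da = 1 - w / r                   # d(fb)/da, a generalized derivative
        Db = 1 - p / r                   # d(fb)/db
        J = Da[:, None] * M + np.diag(Db)
        p = p - np.linalg.solve(J, Phi)
    return p, M @ p + q

# Example: M = I, q = (-1, 2) has solution p = (1, 0), w = (0, 2):
# the first constraint is active (contact), the second is an open gap.
p, w = semismooth_newton_lcp(np.eye(2), np.array([-1.0, 2.0]))
```

Despite Phi not being classically differentiable at the kink, the generalized Newton iteration converges superlinearly near the solution, which is the property the abstract exploits for contact problems.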
A methodology to quantify and optimize time complementarity between hydropower and solar PV systems
NASA Astrophysics Data System (ADS)
Kougias, Ioannis; Szabó, Sándor; Monforti-Ferrario, Fabio; Huld, Thomas; Bódis, Katalin
2016-04-01
Hydropower and solar energy are expected to play a major role in achieving renewable energy sources' (RES) penetration targets. However, the integration of RES in the energy mix needs to overcome technical challenges related to grid operation. There is therefore an increasing need to explore approaches where different RES operate synergetically. Ideally, hydropower and solar PV systems can be jointly developed so that their electricity output profiles complement each other as much as possible, minimizing the need for reserve capacities and storage costs. A straightforward way to achieve this is by optimizing the complementarity among RES systems both over time and spatially. The present research developed a methodology that quantifies the degree of time complementarity between small-scale hydropower stations and solar PV systems and examines ways to increase it. The methodology analyses high-resolution spatial and temporal data for solar radiation obtained from the existing PVGIS model (available online at: http://re.jrc.ec.europa.eu/pvgis/) and associates them with hydrological information on water inflows to a hydropower station. It builds on an exhaustive optimization algorithm that tests possible alterations of the PV system installation (azimuth, tilt) aiming to increase the complementarity, with minor compromises in the total solar energy output. The methodology has been tested in several case studies and the results indicated variations among regions and different hydraulic regimes. In some cases a small compromise in the solar energy output yielded significant increases in complementarity, while in other cases the effect is not as strong. Our contribution aims to present these findings in detail and initiate a discussion on the role and gains of increased complementarity between solar and hydropower energies. Reference: Kougias I, Szabó S, Monforti-Ferrario F, Huld T, Bódis K (2016). A methodology for
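The exhaustive-search idea can be sketched with toy profiles (these are made-up monthly series, not PVGIS or hydrological data, and only tilt is searched; azimuth is omitted for brevity). Complementarity is scored here as the steadiness of the combined output, since a flatter joint profile needs less reserve capacity:

```python
import numpy as np

# Illustrative stand-ins: hydro inflows peaking in winter, and a PV
# profile whose summer peak flattens as the panel tilt grows.
months = np.arange(12)
hydro = 1.0 + np.cos(2 * np.pi * (months + 0.5) / 12)        # winter peak

def pv_profile(tilt):
    amplitude = 1.0 - tilt / 90.0                            # toy tilt effect
    return 1.0 + amplitude * np.cos(2 * np.pi * (months - 5.5) / 12)

def complementarity(tilt):
    # Steadier combined output = better complementarity (less storage).
    return -np.std(hydro + pv_profile(tilt))

# Exhaustive search over candidate tilts, as in the abstract's algorithm.
best_tilt = max(range(0, 90, 5), key=complementarity)
```

In this contrived example the seasonal profiles are exactly out of phase, so the tilt whose PV amplitude matches the hydro amplitude makes the combined output flat; with real data the optimum trades complementarity against total PV yield, as the abstract notes.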
Self-Complementarity within Proteins: Bridging the Gap between Binding and Folding
Basu, Sankar; Bhattacharyya, Dhananjay; Banerjee, Rahul
2012-01-01
Complementarity, in terms of both shape and electrostatic potential, has been quantitatively estimated at protein-protein interfaces and used extensively to predict the specific geometry of association between interacting proteins. In this work, we attempted to place both binding and folding on a common conceptual platform based on complementarity. To that end, we estimated (for the first time to our knowledge) electrostatic complementarity (Em) for residues buried within proteins. Em measures the correlation of surface electrostatic potential at protein interiors. The results show fairly uniform and significant values for all amino acids. Interestingly, hydrophobic side chains also attain appreciable complementarity primarily due to the trajectory of the main chain. Previous work from our laboratory characterized the surface (or shape) complementarity (Sm) of interior residues, and both of these measures have now been combined to derive two scoring functions to identify the native fold amid a set of decoys. These scoring functions are somewhat similar to functions that discriminate among multiple solutions in a protein-protein docking exercise. The performances of both of these functions on state-of-the-art databases were comparable if not better than most currently available scoring functions. Thus, analogously to interfacial residues of protein chains associated (docked) with specific geometry, amino acids found in the native interior have to satisfy fairly stringent constraints in terms of both Sm and Em. The functions were also found to be useful for correctly identifying the same fold for two sequences with low sequence identity. Finally, inspired by the Ramachandran plot, we developed a plot of Sm versus Em (referred to as the complementarity plot) that identifies residues with suboptimal packing and electrostatics which appear to be correlated to coordinate errors. PMID:22713576
NASA Astrophysics Data System (ADS)
Kaloyerou, P. N.
2016-02-01
I argue that quantum optical experiments that purport to refute Bohr's principle of complementarity (BPC) fail in their aim. Some of these experiments try to refute complementarity by refuting the so-called particle-wave duality relations, which evolved from the Wootters-Zurek reformulation of BPC (WZPC). For the arguments that follow, I therefore consider it important to first recall the essential tenets of BPC, and to clearly separate BPC from WZPC, which I will argue is a direct contradiction of BPC. This leads to a need to consider the meaning of the particle-wave duality relations and to question their fundamental status. I further argue (albeit in opposition to BPC) that the particle and wave complementary concepts are on a different footing from other pairs of complementary concepts.
Portela, César; Afonso, Carlos M M; Pinto, Madalena M M; Ramos, Maria João
2003-09-01
One of the most important pharmacological mechanisms of antimalarial action is the inhibition of the aggregation of hematin into hemozoin. We present a group of new potential antimalarial molecules for which we have performed a DFT study of their stereoelectronic properties. Additionally, the same calculations were carried out for the two putative drug receptors involved in the referred activity, i.e., hematin mu-oxo dimer and hemozoin. A complementarity between the structural and electronic profiles of the planned molecules and the receptors can be observed. A docking study of the new compounds in relation to the two putative receptors is also presented, providing a correlation with the defined electrostatic complementarity.
Delayed-choice test of quantum complementarity with interfering single photons.
Jacques, Vincent; Wu, E; Grosshans, Frédéric; Treussart, François; Grangier, Philippe; Aspect, Alain; Roch, Jean-François
2008-06-01
We report an experimental test of quantum complementarity with single-photon pulses sent into a Mach-Zehnder interferometer with an output beam splitter of adjustable reflection coefficient R. In addition, the experiment is realized in Wheeler's delayed-choice regime. Each randomly set value of R allows us to observe interference with visibility V and to obtain incomplete which-path information characterized by the distinguishability parameter D. Measured values of V and D are found to fulfill the complementarity relation V² + D² ≤ 1. PMID:18643406
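For an ideal lossless interferometer of this kind (50/50 input splitter, output splitter of reflectivity R), the standard single-photon analysis gives V = 2·sqrt(R(1-R)) and D = |1 - 2R|, which saturate the bound exactly; mixedness and experimental noise push measured values below it. A quick numerical check of the ideal-case identity (the formulas are the textbook ones, stated here as an assumption):

```python
import numpy as np

# Ideal Mach-Zehnder: visibility and distinguishability versus the
# output beam splitter reflectivity R; V^2 + D^2 = 1 for every R.
for R in np.linspace(0.0, 1.0, 11):
    V = 2 * np.sqrt(R * (1 - R))          # interference visibility
    D = abs(1 - 2 * R)                    # which-path distinguishability
    assert abs(V**2 + D**2 - 1) < 1e-12
```

R = 1/2 gives full visibility and no path information (V = 1, D = 0), while R = 0 or 1 gives complete path information and no interference, the two extremes of the complementarity trade-off.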
Cundiff, Jenny M; Smith, Timothy W; Butner, Jonathan; Critchfield, Kenneth L; Nealey-Moore, Jill
2015-01-01
The principle of complementarity in interpersonal theory states that an actor's behavior tends to "pull, elicit, invite, or evoke" responses from interaction partners who are similar in affiliation (i.e., warmth vs. hostility) and opposite in control (i.e., dominance vs. submissiveness). Furthermore, complementary interactions are proposed to evoke less negative affect and promote greater relationship satisfaction. These predictions were examined in two studies of married couples. Results suggest that complementarity in affiliation describes a robust general pattern of marital interaction, but complementarity in control varies across contexts. Consistent with behavioral models of marital interaction, greater levels of affiliation and lower control by partners, not complementarity in affiliation or control, were associated with less anger and anxiety and greater relationship quality. Partners' levels of affiliation and control combined in ways other than complementarity (mostly additively, but sometimes synergistically) to predict negative affect and relationship satisfaction. PMID:25367005
Linkage Rules for Plant–Pollinator Networks: Trait Complementarity or Exploitation Barriers?
Santamaría, Luis; Rodríguez-Gironés, Miguel A
2007-01-01
Recent attempts to examine the biological processes responsible for the general characteristics of mutualistic networks focus on two types of explanations: nonmatching biological attributes of species that prevent the occurrence of certain interactions (“forbidden links”), arising from trait complementarity in mutualist networks (as compared to barriers to exploitation in antagonistic ones), and random interactions among individuals that are proportional to their abundances in the observed community (“neutrality hypothesis”). We explored the consequences that simple linkage rules based on the first two hypotheses (complementarity of traits versus barriers to exploitation) had on the topology of plant–pollination networks. Independent of the linkage rules used, the inclusion of a small set of traits (two to four) sufficed to account for the complex topological patterns observed in real-world networks. Optimal performance was achieved by a “mixed model” that combined rules that link plants and pollinators whose trait ranges overlap (“complementarity models”) and rules that link pollinators to flowers whose traits are below a pollinator-specific barrier value (“barrier models”). Deterrence of floral parasites (barrier model) is therefore at least as important as increasing pollination efficiency (complementarity model) in the evolutionary shaping of plant–pollinator networks. PMID:17253905
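The two linkage rules compared above can be written schematically (these are simplified boolean versions for illustration, not the paper's simulation code): the complementarity rule links a pollinator to a flower when their trait ranges overlap, while the barrier rule links a pollinator to any flower whose exploitation barrier it can pass, e.g. a proboscis at least as long as the corolla is deep.

```python
def complementarity_link(plant_range, pollinator_range):
    """Link iff the two trait intervals (lo, hi) overlap."""
    lo = max(plant_range[0], pollinator_range[0])
    hi = min(plant_range[1], pollinator_range[1])
    return lo <= hi

def barrier_link(corolla_depth, proboscis_length):
    """Link iff the flower's barrier trait is below the pollinator's trait."""
    return proboscis_length >= corolla_depth
```

Note the asymmetry this creates: barrier links are nested (a long-tongued pollinator reaches every shallower flower), whereas complementarity links are band-limited, which is why mixing the two rule types reproduces real network topology best.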
Climate change mitigation and adaptation in the land use sector: from complementarity to synergy.
Duguma, Lalisa A; Minang, Peter A; van Noordwijk, Meine
2014-09-01
Currently, mitigation and adaptation measures are handled separately, due to differences in priorities for the measures and segregated planning and implementation policies at international and national levels. There is a growing argument that synergistic approaches to adaptation and mitigation could bring substantial benefits at multiple scales in the land use sector. Nonetheless, efforts to implement synergies between adaptation and mitigation measures are rare due to the weak conceptual framing of the approach and constraining policy issues. In this paper, we explore the attributes of synergy and the necessary enabling conditions and discuss, as an example, experience with the Ngitili system in Tanzania that serves both adaptation and mitigation functions. An in-depth look into current practices suggests that more emphasis is laid on complementarity, i.e., mitigation projects providing adaptation co-benefits and vice versa, rather than on synergy. Unlike complementarity, synergy should emphasize functionally sustainable landscape systems in which adaptation and mitigation are optimized as part of multiple functions. We argue that the current practice of seeking co-benefits (complementarity) is a necessary but insufficient step toward addressing synergy. Moving forward from complementarity will require a paradigm shift from the current compartmentalization between mitigation and adaptation to systems thinking at landscape scale. However, enabling policy, institutional, and investment conditions need to be developed at global, national, and local levels to achieve synergistic goals.
Unitarity and fuzzball complementarity: "Alice fuzzes but may not even know it!"
NASA Astrophysics Data System (ADS)
Avery, Steven G.; Chowdhury, Borun D.; Puhm, Andrea
2013-09-01
We investigate the recent black hole firewall argument. For a black hole in a typical state we argue that unitarity requires every quantum of radiation leaving the black hole to carry information about the initial state. An information-free horizon is thus inconsistent with unitarity at every step of the evaporation process. The required horizon-scale structure is manifest in the fuzzball proposal, which provides a mechanism for holding up the structure. In this context we address the experience of an infalling observer and discuss the recent fuzzball complementarity proposal. Unlike black hole complementarity and observer complementarity, which postulate that asymptotic observers experience a hot membrane while infalling ones pass freely through the horizon, fuzzball complementarity postulates that fine-grained operators experience the details of the fuzzball microstate and coarse-grained operators experience the black hole. In particular, this implies that an in-falling detector tuned to energy E ~ T_H, where T_H is the asymptotic Hawking temperature, does not experience free infall, while one tuned to E ≫ T_H does.
Revisiting the quark-lepton complementarity and triminimal parametrization of neutrino mixing matrix
Kang, Sin Kyu
2011-05-01
We examine how a parametrization of neutrino mixing matrix reflecting quark-lepton complementarity can be probed by considering phase-averaged oscillation probabilities, flavor composition of neutrino fluxes coming from atmospheric and astrophysical neutrinos and lepton flavor violating radiative decays. We discuss some distinct features of the parametrization by comparing the triminimal parametrization of perturbations to the tribimaximal neutrino mixing matrix.
A Test of the Complementarity Hypothesis in A-B Research
ERIC Educational Resources Information Center
Lynch, Denis J.; And Others
1976-01-01
The complementarity hypothesis, which suggests that A-type therapists be paired with B-type clients and vice versa, was tested in an analogue study. While several main effects of interest were found, the interaction of client and therapist characteristics was in the reverse direction of expectation. (NG)
Is the firewall consistent? Gedanken experiments on black hole complementarity and firewall proposal
Hwang, Dong-il; Lee, Bum-Hoon; Yeom, Dong-han
2013-01-01
In this paper, we discuss black hole complementarity and the firewall proposal at length. Black hole complementarity is inevitable if we assume the following five things: unitarity, the entropy-area formula, the existence of an information observer, semi-classical quantum field theory for an asymptotic observer, and general relativity for an in-falling observer. However, large N rescaling and the AMPS argument show that black hole complementarity is inconsistent. To salvage the basic philosophy of black hole complementarity, AMPS introduced a firewall around the horizon. According to large N rescaling, the firewall should be located close to the apparent horizon. We investigate the consistency of the firewall with two critical conditions: the firewall should be near the time-like apparent horizon, and it should not affect the future infinity. To this end, we have introduced a gravitational collapse with a false vacuum lump, which can generate a spacetime structure with disconnected apparent horizons. This reveals a situation in which there is a firewall outside of the event horizon while the apparent horizon is absent. Therefore, the firewall, if it exists, not only modifies general relativity for an in-falling observer, but also modifies the semi-classical quantum field theory for an asymptotic observer.
ERIC Educational Resources Information Center
O'Toole, John; Dunn, Julie
2008-01-01
This article reports the findings of a research project that saw researchers from interaction design and drama education come together with a group of eleven and twelve year olds to investigate the current and future complementarity of computers and live classroom drama. The project was part of a pilot feasibility study commissioned by the…
Hernandez, Pauline; Picon-Cochard, Catherine
2016-01-01
Legume species promote productivity and increase the digestibility of herbage in grasslands. Considerable experimental data also indicate that communities with legumes produce more above-ground biomass than is expected from monocultures. While this has been attributed to N facilitation, evidence to identify the mechanisms involved is still lacking, and the role of complementarity in soil water acquisition by vertical root differentiation remains unclear. We used a 20-month mesocosm experiment to investigate the effects of species richness (single species, two- and five-species mixtures) and functional diversity (presence of the legume Trifolium repens) on a set of traits related to light, N and water use, measured at community level. We found a positive effect of Trifolium presence and abundance on biomass production and complementarity effects in the two-species mixtures from the second year. In addition, the community traits related to water and N acquisition and use (leaf area, N, water-use efficiency, and deep root growth) were higher in the presence of Trifolium. With a multiple regression approach, we showed that the traits related to water acquisition and use were, with N, the main determinants of biomass production and complementarity effects in diverse mixtures. At shallow soil layers, lower root mass of Trifolium and higher soil moisture should increase soil water availability for the associated grass species. Conversely, at deep soil layers, higher root growth and lower soil moisture mirror the increased soil resource use of mixtures. Altogether, these results highlight N facilitation but above all vertical soil differentiation and thus complementarity for water acquisition and use in mixtures with Trifolium. Contrary to grass-Trifolium mixtures, no significant over-yielding was measured for grass mixtures, even those having complementary traits (short and shallow vs. tall and deep). Thus, vertical complementarity for soil resources uptake in mixtures was not only
Hernandez, Pauline; Picon-Cochard, Catherine
2016-01-01
Legume species promote productivity and increase the digestibility of herbage in grasslands. Considerable experimental data also indicate that communities with legumes produce more above-ground biomass than is expected from monocultures. While it has been attributed to N facilitation, evidence to identify the mechanisms involved is still lacking and the role of complementarity in soil water acquisition by vertical root differentiation remains unclear. We used a 20-months mesocosm experiment to investigate the effects of species richness (single species, two- and five-species mixtures) and functional diversity (presence of the legume Trifolium repens) on a set of traits related to light, N and water use and measured at community level. We found a positive effect of Trifolium presence and abundance on biomass production and complementarity effects in the two-species mixtures from the second year. In addition the community traits related to water and N acquisition and use (leaf area, N, water-use efficiency, and deep root growth) were higher in the presence of Trifolium. With a multiple regression approach, we showed that the traits related to water acquisition and use were with N the main determinants of biomass production and complementarity effects in diverse mixtures. At shallow soil layers, lower root mass of Trifolium and higher soil moisture should increase soil water availability for the associated grass species. Conversely at deep soil layer, higher root growth and lower soil moisture mirror soil resource use increase of mixtures. Altogether, these results highlight N facilitation but almost soil vertical differentiation and thus complementarity for water acquisition and use in mixtures with Trifolium. Contrary to grass-Trifolium mixtures, no significant over-yielding was measured for grass mixtures even those having complementary traits (short and shallow vs. tall and deep). Thus, vertical complementarity for soil resources uptake in mixtures was not only
Hernandez, Pauline; Picon-Cochard, Catherine
2016-01-01
Legume species promote productivity and increase the digestibility of herbage in grasslands. Considerable experimental data also indicate that communities with legumes produce more above-ground biomass than is expected from monocultures. While this has been attributed to N facilitation, evidence to identify the mechanisms involved is still lacking, and the role of complementarity in soil water acquisition through vertical root differentiation remains unclear. We used a 20-month mesocosm experiment to investigate the effects of species richness (single species, two- and five-species mixtures) and functional diversity (presence of the legume Trifolium repens) on a set of traits related to light, N and water use, measured at the community level. We found a positive effect of Trifolium presence and abundance on biomass production and complementarity effects in the two-species mixtures from the second year. In addition, the community traits related to water and N acquisition and use (leaf area, N, water-use efficiency, and deep root growth) were higher in the presence of Trifolium. With a multiple regression approach, we showed that the traits related to water acquisition and use were, together with N, the main determinants of biomass production and complementarity effects in diverse mixtures. At shallow soil layers, the lower root mass of Trifolium and higher soil moisture should increase soil water availability for the associated grass species. Conversely, at deep soil layers, higher root growth and lower soil moisture mirror the increased soil resource use of mixtures. Altogether, these results highlight N facilitation but above all vertical soil differentiation and thus complementarity for water acquisition and use in mixtures with Trifolium. Contrary to grass-Trifolium mixtures, no significant over-yielding was measured for grass mixtures, even those having complementary traits (short and shallow vs. tall and deep). Thus, vertical complementarity for soil resource uptake in mixtures was not only
Sidorin, Anatoly
2010-01-05
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed, and the methods of beam focusing in linacs are described.
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced into the linear programming problem for linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem for linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem in a very compact form, leading to fast execution, which makes it possible to compute the bounds efficiently for large parameter values of the linear codes.
Hydro-elastic complementarity in black branes at large D
NASA Astrophysics Data System (ADS)
Emparan, Roberto; Izumi, Keisuke; Luna, Raimon; Suzuki, Ryotaku; Tanabe, Kentaro
2016-06-01
We obtain the effective theory for the non-linear dynamics of black branes — both neutral and charged, in asymptotically flat or anti-de Sitter spacetimes — to leading order in the inverse-dimensional expansion. We find that black branes evolve as viscous fluids, but when they settle down they are more naturally viewed as solutions of an elastic soap-bubble theory. The two views are complementary: the same variable is regarded in one case as the energy density of the fluid, in the other as the deformation of the elastic membrane. The large-D theory captures finite-wavelength phenomena beyond the conventional reach of hydrodynamics. For asymptotically flat charged black branes (either Reissner-Nordström or p-brane-charged black branes) it yields the non-linear evolution of the Gregory-Laflamme instability at large D and its endpoint at stable non-uniform black branes. For Reissner-Nordström AdS black branes we find that sound perturbations do not propagate (have purely imaginary frequency) when their wavelength is below a certain charge-dependent value. We also study the polarization of black branes induced by an external electric field.
Shou, Guofa; Xia, Ling; Jiang, Mingfeng; Wei, Qing; Liu, Feng; Crozier, Stuart
2009-05-01
The boundary element method (BEM) is a commonly used numerical approach for solving biomedical electromagnetic volume conductor models such as ECG and EEG problems, in which only the interfaces between various tissue regions need to be modeled. The quality of the boundary element discretization affects the accuracy of the numerical solution, and the construction of high-quality meshes is time-consuming and always problem-dependent. Adaptive BEM (aBEM) has been developed and validated as an effective method to tackle such problems in electromagnetic and mechanical fields, but has not been extensively investigated for the ECG problem. In this paper, the h aBEM, which produces refined meshes through adaptive adjustment of the elements' connections, is investigated for the ECG forward problem. Two different refinement schemes, adding one new node (SH1) and adding three new nodes (SH3), are applied in the h aBEM calculation. To save computational time, the h-hierarchical aBEM is also used, through the introduction of h-hierarchical shape functions for SH3. The algorithms were evaluated with a single-layer homogeneous sphere model with assumed dipole sources and a geometrically realistic heart-torso model. The simulations showed that h aBEM can produce better mesh results and is more accurate and effective than the traditional BEM for the ECG problem. With the same refinement scheme SH3, the h-hierarchical aBEM reduces computational costs by about 9% compared with the standard h aBEM.
NASA Astrophysics Data System (ADS)
Yamamoto, Akira; Yokoya, Kaoru
2015-02-01
An overview of linear collider programs is given. The history and technical challenges are described and the pioneering electron-positron linear collider, the SLC, is first introduced. For future energy frontier linear collider projects, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) are introduced and their technical features are discussed. The ILC is based on superconducting RF technology and the CLIC is based on two-beam acceleration technology. The ILC collaboration completed the Technical Design Report in 2013, and has come to the stage of "Design to Reality." The CLIC collaboration published the Conceptual Design Report in 2012, and the key technology demonstration is in progress. The prospects for further advanced acceleration technology are briefly discussed for possible long-term future linear colliders.
Jensen, Peter D; Zhang, Yuanji; Wiggins, B Elizabeth; Petrick, Jay S; Zhu, Jin; Kerstetter, Randall A; Heck, Gregory R; Ivashuta, Sergey I
2013-01-01
Long double-stranded RNAs (long dsRNAs) are precursors for the effector molecules of sequence-specific RNA-based gene silencing in eukaryotes. Plant cells can contain numerous endogenous long dsRNAs. This study demonstrates that such endogenous long dsRNAs in plants have sequence complementarity to human genes. Many of these complementary long dsRNAs have perfect sequence complementarity of at least 21 nucleotides to human genes; enough complementarity to potentially trigger gene silencing in targeted human cells if delivered in functional form. However, the number and diversity of long dsRNA molecules in plant tissue from crops such as lettuce, tomato, corn, soy and rice with complementarity to human genes that have a long history of safe consumption supports a conclusion that long dsRNAs do not present a significant dietary risk.
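The core screening step this abstract describes, locating stretches of at least 21 nucleotides in a plant RNA that are perfectly complementary to a human transcript, amounts to a reverse-complement k-mer lookup. The sketch below illustrates the idea on invented toy sequences; the function names and data are hypothetical, not the study's actual pipeline.

```python
# Sketch: find all length-21 windows of a plant RNA sequence whose
# reverse complement occurs verbatim in a human mRNA sequence.
# Sequences below are made-up toy data, not from the study.

COMPLEMENT = str.maketrans("ACGU", "UGCA")

def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA string (A<->U, C<->G)."""
    return rna.translate(COMPLEMENT)[::-1]

def complementary_windows(plant_rna: str, human_mrna: str, k: int = 21):
    """Return start positions in plant_rna whose k-mer is perfectly
    complementary (antiparallel) to some k-mer of human_mrna."""
    human_kmers = {human_mrna[i:i + k] for i in range(len(human_mrna) - k + 1)}
    hits = []
    for i in range(len(plant_rna) - k + 1):
        if reverse_complement(plant_rna[i:i + k]) in human_kmers:
            hits.append(i)
    return hits

# Toy example: the plant RNA carries the reverse complement of a
# 21-nt stretch of a hypothetical human mRNA, embedded at position 3.
target = "AUGGCCUUAGCAGGAACCUGA"                  # 21 nt, invented
plant = "GGG" + reverse_complement(target) + "AAA"
print(complementary_windows(plant, target))      # -> [3]
```

Precomputing the set of all k-mers of the human transcript makes each window test a single hash lookup, so a transcriptome-scale scan stays roughly linear in total sequence length.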
Plant diversity increases spatio-temporal niche complementarity in plant-pollinator interactions.
Venjakob, Christine; Klein, Alexandra-Maria; Ebeling, Anne; Tscharntke, Teja; Scherber, Christoph
2016-04-01
Ongoing biodiversity decline impairs ecosystem processes, including pollination. Flower visitation, an important indicator of pollination services, is influenced by plant species richness. However, the spatio-temporal responses of different pollinator groups to plant species richness have not yet been analyzed experimentally. Here, we used an experimental plant species richness gradient to analyze plant-pollinator interactions with an unprecedented spatio-temporal resolution. We observed four pollinator functional groups (honeybees, bumblebees, solitary bees, and hoverflies) in experimental plots at three different vegetation strata between sunrise and sunset. Visits were modified by plant species richness interacting with time and space. Furthermore, the complementarity of pollinator functional groups in space and time was stronger in species-rich mixtures. We conclude that high plant diversity should ensure stable pollination services, mediated via spatio-temporal niche complementarity in flower visitation. PMID:27069585
Two chiral preon models with SU(N) metacolor satisfying complementarity
NASA Astrophysics Data System (ADS)
Geng, C. Q.; Marshak, R. E.
1987-04-01
We have constructed two chiral preon models based on the group SU(N)_MC × SU(N+4)_F × U(1)_F (MC is gauged metacolor and F is global color flavor), the simplest (M=0) version of a class of models SU(N)_MC × SU(N+M+4)_F × SU(M)_F × U(1)_F^2 studied by Bars and Yankielowicz. In contrast with earlier work, our models satisfy the principle of complementarity between the Higgs and confining phases. In one model, N=16 and four generations of ordinary quarks and leptons are found at the gauged SO(10) level. The second model predicts three quark-lepton families at the gauged SU(5) level without a right-handed neutrino. We also show that complementarity holds for the M≠0 models but that, for N=15 or 16, the results at the gauged level are identical with the M=0 case.
Wave-particle dualism and complementarity unraveled by a different mode.
Menzel, Ralf; Puhlmann, Dirk; Heuer, Axel; Schleich, Wolfgang P
2012-06-12
The precise knowledge of one of two complementary experimental outcomes prevents us from obtaining complete information about the other one. This formulation of Niels Bohr's principle of complementarity, when applied to the paradigm of wave-particle dualism (that is, to Young's double-slit experiment), implies that the information about the slit through which a quantum particle has passed erases interference. In the present paper we report a double-slit experiment using two photons created by spontaneous parametric down-conversion, where we observe interference in the signal photon despite the fact that we have located it in one of the slits due to its entanglement with the idler photon. This surprising aspect of complementarity comes to light by our special choice of the TEM(01) pump mode. According to quantum field theory, the signal photon is then in a coherent superposition of two distinct wave vectors, giving rise to interference fringes analogous to two mechanical slits.
A Structural Connection between Linear and 0-1 Integer Linear Formulations
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2007-01-01
The connection between linear and 0-1 integer linear formulations has attracted the attention of many researchers. The main reason triggering this interest has been the availability of efficient computer programs for solving pure linear problems, including the transportation problem. Also the optimality of linear problems is easily verifiable…
Brown, Marion B; Schlacher, Thomas A; Schoeman, David S; Weston, Michael A; Huijbers, Chantal M; Olds, Andrew D; Connolly, Rod M
2015-10-01
Species composition is expected to alter ecological function in assemblages if species traits differ strongly. Such effects are often large and persistent for nonnative carnivores invading islands. Alternatively, high similarity in traits within assemblages creates a degree of functional redundancy in ecosystems. Here we tested whether species turnover results in functional ecological equivalence or complementarity, and whether invasive carnivores on islands significantly alter such ecological function. The model system consisted of vertebrate scavengers (dominated by raptors) foraging on animal carcasses on ocean beaches on two Australian islands, one with and one without invasive red foxes (Vulpes vulpes). Partitioning of scavenging events among species, carcass removal rates, and detection speeds were quantified using camera traps baited with fish carcasses at the dune-beach interface. Complete segregation of temporal foraging niches between mammals (nocturnal) and birds (diurnal) reflects complementarity in carrion utilization. Conversely, functional redundancy exists within the bird guild where several species of raptors dominate carrion removal in a broadly similar way. As predicted, effects of red foxes were large. They substantially changed the nature and rate of the scavenging process in the system: (1) foxes consumed over half (55%) of all carrion available at night, compared with negligible mammalian foraging at night on the fox-free island, and (2) significant shifts in the composition of the scavenger assemblages consuming beach-cast carrion are the consequence of fox invasion at one island. Arguably, in the absence of other mammalian apex predators, the addition of red foxes creates a new dimension of functional complementarity in beach food webs. However, this functional complementarity added by foxes is neither benign nor neutral, as marine carrion subsidies to coastal red fox populations are likely to facilitate their persistence as exotic
Complementarity among four highly productive grassland species depends on resource availability.
Roscher, Christiane; Schmid, Bernhard; Kolle, Olaf; Schulze, Ernst-Detlef
2016-06-01
Positive species richness-productivity relationships are common in biodiversity experiments, but how resource availability modifies biodiversity effects in grass-legume mixtures composed of highly productive species is yet to be explicitly tested. We addressed this question by choosing two grasses (Arrhenatherum elatius and Dactylis glomerata) and two legumes (Medicago × varia and Onobrychis viciifolia) which are highly productive in monocultures and dominant in mixtures (the Jena Experiment). We established monocultures, all possible two- and three-species mixtures, and the four-species mixture under three different resource supply conditions (control, fertilization, and shading). Compared to the control, community biomass production decreased under shading (-56 %) and increased under fertilization (+12 %). Net diversity effects (i.e., mixture minus mean monoculture biomass) were positive in the control and under shading (on average +15 and +72 %, respectively) and negative under fertilization (-10 %). Positive complementarity effects in the control suggested resource partitioning and facilitation of growth through symbiotic N2 fixation by legumes. Positive complementarity effects under shading indicated that resource partitioning is also possible when growth is carbon-limited. Negative complementarity effects under fertilization suggested that external nutrient supply depressed facilitative grass-legume interactions due to increased competition for light. Selection effects, which quantify the dominance of species with particularly high monoculture biomasses in the mixture, were generally small compared to complementarity effects, and indicated that these species had comparable competitive strengths in the mixture. Our study shows that resource availability has a strong impact on the occurrence of positive diversity effects among tall and highly productive grass and legume species. PMID:26932467
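The net, complementarity, and selection effects reported in abstracts like this one are conventionally computed with the additive partition of Loreau and Hector (2001). A minimal sketch with invented yields (not the Jena Experiment data):

```python
# Additive partition of the net diversity effect into complementarity and
# selection effects (Loreau & Hector 2001), the standard decomposition
# behind net/complementarity/selection statements in BEF studies.
# All numbers below are invented toy data.

def loreau_hector(observed, monoculture, expected_ry):
    """observed[i]: species i's yield in the mixture; monoculture[i]: its
    monoculture yield; expected_ry[i]: its expected relative yield (e.g.
    its sown proportion). Returns (net, complementarity, selection)."""
    n = len(observed)
    # Deviation of each species' realized relative yield from expectation.
    d_ry = [obs / mono - exp_ry
            for obs, mono, exp_ry in zip(observed, monoculture, expected_ry)]
    mean_dry = sum(d_ry) / n
    mean_m = sum(monoculture) / n
    cov = sum((d - mean_dry) * (m - mean_m)
              for d, m in zip(d_ry, monoculture)) / n
    complementarity = n * mean_dry * mean_m
    selection = n * cov
    return complementarity + selection, complementarity, selection

# Hypothetical grass/legume two-species mixture, each sown at 50%:
net, ce, se = loreau_hector(observed=[300.0, 260.0],
                            monoculture=[500.0, 400.0],
                            expected_ry=[0.5, 0.5])
print(round(net, 1), round(ce, 1), round(se, 1))  # -> 110.0 112.5 -2.5
```

Here the mixture over-yields by 110 units relative to the monoculture expectation of 450, almost entirely through complementarity; the small negative selection effect reflects the slightly weaker performance gain of the higher-yielding monoculture species.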
Brown, Marion B; Schlacher, Thomas A; Schoeman, David S; Weston, Michael A; Huijbers, Chantal M; Olds, Andrew D; Connolly, Rod M
2015-10-01
Species composition is expected to alter ecological function in assemblages if species traits differ strongly. Such effects are often large and persistent for nonnative carnivores invading islands. Alternatively, high similarity in traits within assemblages creates a degree of functional redundancy in ecosystems. Here we tested whether species turnover results in functional ecological equivalence or complementarity, and whether invasive carnivores on islands significantly alter such ecological function. The model system consisted of vertebrate scavengers (dominated by raptors) foraging on animal carcasses on ocean beaches on two Australian islands, one with and one without invasive red foxes (Vulpes vulpes). Partitioning of scavenging events among species, carcass removal rates, and detection speeds were quantified using camera traps baited with fish carcasses at the dune-beach interface. Complete segregation of temporal foraging niches between mammals (nocturnal) and birds (diurnal) reflects complementarity in carrion utilization. Conversely, functional redundancy exists within the bird guild where several species of raptors dominate carrion removal in a broadly similar way. As predicted, effects of red foxes were large. They substantially changed the nature and rate of the scavenging process in the system: (1) foxes consumed over half (55%) of all carrion available at night, compared with negligible mammalian foraging at night on the fox-free island, and (2) significant shifts in the composition of the scavenger assemblages consuming beach-cast carrion are the consequence of fox invasion at one island. Arguably, in the absence of other mammalian apex predators, the addition of red foxes creates a new dimension of functional complementarity in beach food webs. However, this functional complementarity added by foxes is neither benign nor neutral, as marine carrion subsidies to coastal red fox populations are likely to facilitate their persistence as exotic
ERIC Educational Resources Information Center
Walkiewicz, T. A.; Newby, N. D., Jr.
1972-01-01
A discussion of linear collisions between two or three objects is related to a junior-level course in analytical mechanics. The theoretical discussion uses a geometrical approach that treats elastic and inelastic collisions from a unified point of view. Experiments with a linear air track are described. (Author/TS)
Caliman, Adriano; Carneiro, Luciana S; Leal, João J F; Farjalla, Vinicius F; Bozelli, Reinaldo L; Esteves, Francisco A
2012-01-01
Tests of the biodiversity and ecosystem functioning (BEF) relationship have focused little attention on the importance of interactions between species diversity and other attributes of ecological communities such as community biomass. Moreover, BEF research has been mainly derived from studies measuring a single ecosystem process that often represents resource consumption within a given habitat. Focus on single processes has prevented us from exploring the characteristics of ecosystem processes that can be critical in helping us to identify how novel pathways throughout BEF mechanisms may operate. Here, we investigated whether and how the effects of biodiversity mediated by non-trophic interactions among benthic bioturbator species vary according to community biomass and ecosystem processes. We hypothesized that (1) bioturbator biomass and species richness interact to affect the rates of benthic nutrient regeneration [dissolved inorganic nitrogen (DIN) and total dissolved phosphorus (TDP)] and consequently bacterioplankton production (BP) and that (2) the complementarity effects of diversity will be stronger on BP than on nutrient regeneration because the former represents a more integrative process that can be mediated by multivariate nutrient complementarity. We show that the effects of bioturbator diversity on nutrient regeneration increased BP via multivariate nutrient complementarity. Consistent with our prediction, the complementarity effects were significantly stronger on BP than on DIN and TDP. The effects of the biomass-species richness interaction on complementarity varied among the individual processes, but the aggregated measures of complementarity over all ecosystem processes were significantly higher at the highest community biomass level. Our results suggest that the complementarity effects of biodiversity can be stronger on more integrative ecosystem processes, which integrate subsidiary "simpler" processes, via multivariate complementarity. In
Climate Change Mitigation and Adaptation in the Land Use Sector: From Complementarity to Synergy
NASA Astrophysics Data System (ADS)
Duguma, Lalisa A.; Minang, Peter A.; van Noordwijk, Meine
2014-09-01
Currently, mitigation and adaptation measures are handled separately, due to differences in priorities for the measures and segregated planning and implementation policies at international and national levels. There is a growing argument that synergistic approaches to adaptation and mitigation could bring substantial benefits at multiple scales in the land use sector. Nonetheless, efforts to implement synergies between adaptation and mitigation measures are rare due to the weak conceptual framing of the approach and constraining policy issues. In this paper, we explore the attributes of synergy and the necessary enabling conditions and discuss, as an example, experience with the Ngitili system in Tanzania that serves both adaptation and mitigation functions. An in-depth look into the current practices suggests that more emphasis is laid on complementarity—i.e., mitigation projects providing adaptation co-benefits and vice versa—rather than on synergy. Unlike complementarity, synergy should emphasize functionally sustainable landscape systems in which adaptation and mitigation are optimized as part of multiple functions. We argue that the current practice of seeking co-benefits (complementarity) is a necessary but insufficient step toward addressing synergy. Moving forward from complementarity will require a paradigm shift from current compartmentalization between mitigation and adaptation to systems thinking at landscape scale. However, enabling policy, institutional, and investment conditions need to be developed at global, national, and local levels to achieve synergistic goals.
Linearly Adjustable International Portfolios
NASA Astrophysics Data System (ADS)
Fonseca, R. J.; Kuhn, D.; Rustem, B.
2010-09-01
We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
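The restriction of recourse decisions to linear rules can be illustrated on a toy two-stage problem. Everything below (the quadratic tracking cost, the least-squares fit of the rule) is an invented stand-in for the paper's portfolio model; it shows only the key trade-off: a linear rule is tractable but conservative relative to a fully adaptive policy.

```python
import random

random.seed(0)

# Toy two-stage problem: after observing a random outcome xi, choose a
# recourse decision y to minimize E[(y - max(0, xi))^2]. The fully
# adaptive policy y*(xi) = max(0, xi) attains cost 0; restricting y to a
# linear rule y(xi) = a + b*xi gives a tractable but conservative
# approximation, in the spirit of the linear-decision-rule approach.

scenarios = [random.gauss(0.0, 1.0) for _ in range(10_000)]
target = [max(0.0, s) for s in scenarios]

# Least-squares fit of the linear rule over the sampled scenarios.
n = len(scenarios)
mean_x = sum(scenarios) / n
mean_t = sum(target) / n
b = sum((x - mean_x) * (t - mean_t) for x, t in zip(scenarios, target)) / \
    sum((x - mean_x) ** 2 for x in scenarios)
a = mean_t - b * mean_x

cost_linear = sum((a + b * x - t) ** 2 for x, t in zip(scenarios, target)) / n
print(f"best linear rule: y = {a:.3f} + {b:.3f}*xi, cost {cost_linear:.3f}")
# The nonzero residual cost is the price paid for tractability.
```

For a standard normal xi the best rule is roughly y = 0.4 + 0.5*xi with a residual cost near 0.09, whereas the unrestricted recourse would achieve 0; the same gap, in the portfolio setting, is what makes the linear-rule approximation conservative.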
Lombaert, Eric; Guillemaud, Thomas; Lundgren, Jonathan; Koch, Robert; Facon, Benoît; Grez, Audrey; Loomans, Antoon; Malausa, Thibaut; Nedved, Oldrich; Rhule, Emma; Staverlokk, Arnstein; Steenberg, Tove; Estoup, Arnaud
2014-12-01
Inferences about introduction histories of invasive species remain challenging because of the stochastic demographic processes involved. Approximate Bayesian computation (ABC) can help to overcome these problems, but such methods require a prior understanding of population structure over the study area, necessitating the use of alternative methods and an intense sampling design. In this study, we made inferences about the worldwide invasion history of the ladybird Harmonia axyridis by various population genetics statistical methods, using a large set of sampling sites distributed over most of the species' native and invaded areas. We evaluated the complementarity of the statistical methods and the consequences of using different sets of site samples for ABC inferences. We found that the H. axyridis invasion has involved two bridgehead invasive populations in North America, which have served as the source populations for at least six independent introductions into other continents. We also identified several situations of genetic admixture between differentiated sources. Our results highlight the importance of coupling ABC methods with more traditional statistical approaches. We found that the choice of site samples could affect the conclusions of ABC analyses comparing possible scenarios. Approaches involving independent ABC analyses on several sample sets constitute a sensible solution, complementary to standard quality controls based on the analysis of pseudo-observed data sets, to minimize erroneous conclusions. This study provides biologists without expertise in this area with detailed methodological and conceptual guidelines for making inferences about invasion routes when dealing with a large number of sampling sites and complex population genetic structures.
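The ABC idea itself can be sketched with a minimal rejection sampler on a toy model (a binomial observation under a uniform prior); the model, tolerance, and sample sizes below are illustrative only, far simpler than the population-genetic scenarios compared in the study.

```python
import random

random.seed(1)

# Minimal ABC rejection sampler on a toy model: observe k successes out
# of n trials, infer the success probability p under a Uniform(0, 1)
# prior. Draw p from the prior, simulate data, and keep the draw if the
# simulated summary statistic is close enough to the observed one.

def abc_rejection(observed_k: int, n: int, tolerance: int, draws: int):
    """Return accepted posterior samples of p."""
    accepted = []
    for _ in range(draws):
        p = random.random()                        # draw from the prior
        simulated_k = sum(random.random() < p for _ in range(n))
        if abs(simulated_k - observed_k) <= tolerance:
            accepted.append(p)
    return accepted

samples = abc_rejection(observed_k=7, n=10, tolerance=0, draws=20_000)
estimate = sum(samples) / len(samples)
print(f"{len(samples)} accepted draws, posterior mean ~ {estimate:.2f}")
# With a uniform prior the exact posterior is Beta(8, 4), mean 8/12 ~ 0.67.
```

Real invasion-route ABC replaces the binomial simulator with coalescent simulations of competing introduction scenarios and the exact summary match with a tolerance on many genetic summary statistics, but the accept/reject skeleton is the same.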
de Albuquerque, Fábio Suzart; Beier, Paul
2016-06-01
Given species inventories of all sites in a planning area, integer programming or heuristic algorithms can prioritize sites in terms of each site's complementary value, that is, the ability of the site to complement (add unrepresented species to) other sites prioritized for conservation. The utility of these procedures is limited because distributions of species are typically available only as coarse atlases or range maps, whereas conservation planners need to prioritize relatively small sites. If such coarse-resolution information can be used to identify small sites that efficiently represent species (i.e., downscaled), then such data can be useful for conservation planning. We develop and test a new type of surrogate for biodiversity, which we call downscaled complementarity. In this approach, complementarity values from large cells are downscaled to small cells, using statistical methods or simple map overlays. We illustrate our approach for birds in Spain by building models at coarse scale (a 50 × 50 km atlas of European birds, and global range maps of birds interpreted at the same 50 × 50 km grid size), using this model to predict complementary value for 10 × 10 km cells in Spain, and testing how well prioritized cells represented bird distributions in an independent bird atlas of those 10 × 10 km cells. Downscaled complementarity was about 63-77% as effective as having full knowledge of the 10-km atlas data in its ability to improve on random selection of sites. Downscaled complementarity has relatively low data acquisition cost and meets representation goals well compared with other surrogates currently in use. Our study justifies additional tests to determine whether downscaled complementarity is an effective surrogate for other regions and taxa, and at spatial resolutions finer than 10 × 10 km cells. Until such tests have been completed, we caution against assuming that any surrogate can reliably prioritize sites for species representation.
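The heuristic prioritization by complementary value mentioned above is commonly a greedy loop: repeatedly select the site that adds the most not-yet-represented species. A sketch on invented inventories (real inputs would be atlas grid cells with species lists):

```python
# Greedy complementarity heuristic for site prioritization: at each step,
# pick the site adding the most species not yet represented. The site
# inventories below are invented toy data.

def greedy_complementarity(inventories):
    """Order sites by complementary value until all species are covered.
    inventories maps site name -> set of species present."""
    uncovered = set().union(*inventories.values())
    remaining = dict(inventories)
    ranking = []
    while uncovered:
        # Site with the most uncovered species; ties broken by site name
        # (sorted order) so the result is deterministic.
        best = max(sorted(remaining),
                   key=lambda s: len(remaining[s] & uncovered))
        ranking.append(best)
        uncovered -= remaining.pop(best)
    return ranking

sites = {
    "A": {"lark", "wren", "kite"},
    "B": {"kite", "stork"},
    "C": {"stork", "crane", "heron"},
    "D": {"wren"},
}
print(greedy_complementarity(sites))  # -> ['A', 'C']
```

Note that the species-rich sites A and C together cover all six species, so the redundant sites B and D are never selected; that efficiency relative to richness-only ranking is the point of complementarity-based prioritization.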
Complementarity of Historic Building Information Modelling and Geographic Information Systems
NASA Astrophysics Data System (ADS)
Yang, X.; Koehl, M.; Grussenmeyer, P.; Macher, H.
2016-06-01
In this paper, we discuss the potential of integrating semantically rich models from Building Information Modelling (BIM) and Geographical Information Systems (GIS) to build detailed 3D historic models. BIM contributes to the creation of a digital representation having all physical and functional building characteristics in several dimensions, e.g. XYZ (3D), time, and the non-architectural information that is necessary for construction and management of buildings. GIS has potential in handling and managing spatial data, especially in exploring spatial relationships, and is widely used in urban modelling. However, when considering heritage modelling, the specificity of irregular historical components makes it problematic to create an enriched model according to its complex architectural elements obtained from point clouds. Therefore, some open issues limiting historic building 3D modelling are discussed in this paper: how to deal with the complex elements composing historic buildings in BIM and GIS environments, how to build the enriched historic model, and why construct different levels of detail? By solving these problems, conceptualization, documentation and analysis of enriched Historic Building Information Modelling are developed and compared to traditional 3D models aimed primarily at visualization.
Information complementarity: A new paradigm for decoding quantum incompatibility.
Zhu, Huangjun
2015-01-01
The existence of observables that are incompatible, or not jointly measurable, is a characteristic feature of quantum mechanics, which lies at the root of a number of nonclassical phenomena, such as uncertainty relations, wave-particle dual behavior, Bell-inequality violation, and contextuality. However, no intuitive criterion is available for determining the compatibility of even two (generalized) observables, despite the overarching importance of this problem and the intensive efforts of many researchers. Here we introduce an information-theoretic paradigm together with an intuitive geometric picture for decoding incompatible observables, starting from two simple ideas: every observable can provide only limited information, and information is monotonic under data processing. By virtue of quantum estimation theory, we introduce a family of universal criteria for detecting incompatible observables and a natural measure of incompatibility, applicable to an arbitrary number of arbitrary observables. Based on this framework, we derive a family of universal measurement uncertainty relations, provide a simple information-theoretic explanation of quantitative wave-particle duality, and offer new perspectives for understanding Bell nonlocality, contextuality, and the quantum precision limit. PMID:26392075
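For the special case of sharp (projective) observables, compatibility does reduce to a simple criterion: joint measurability is equivalent to commutativity. The sketch below checks this textbook special case numerically; it is not the information-theoretic criterion the paper introduces, which also covers generalized (unsharp) observables.

```python
import numpy as np

# Sharp (projective) observables are jointly measurable iff they commute.
# This is the textbook special case only; generalized observables need the
# stronger criteria discussed in the abstract.

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

def compatible_sharp(A, B, tol=1e-12):
    """Return True when the commutator [A, B] vanishes."""
    return np.allclose(A @ B - B @ A, 0, atol=tol)

print(compatible_sharp(X, Z))  # False: X and Z are incompatible
print(compatible_sharp(X, X))  # True: any observable is self-compatible
```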
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Sparse linear programming subprogram
Hanson, R.J.; Hiebert, K.L.
1981-12-01
This report describes a subprogram, SPLP(), for solving linear programming problems. The package of subprogram units comprising SPLP() is written in Fortran 77. The subprogram SPLP() is intended for problems involving at most a few thousand constraints and variables. The subprograms are written to take advantage of sparsity in the constraint matrix. A very general problem statement is accepted by SPLP(). It allows upper, lower, or no bounds on the variables. Both the primal and dual solutions are returned as output parameters. The package has many optional features. Among them is the ability to save partial results and then use them to continue the computation at a later time.
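The general problem statement SPLP() accepts (sparse constraints, upper/lower/no bounds per variable, primal and dual solutions returned) can be illustrated with a modern stand-in. The sketch below uses SciPy's HiGHS-based `linprog` rather than the Fortran 77 package itself; the small instance is invented for illustration.

```python
# Sketch of the SPLP()-style problem statement -- per-variable bounds,
# primal and dual solutions returned -- using SciPy's HiGHS solver as a
# stand-in for the Fortran 77 subprogram.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])       # minimize c @ x
A_ub = np.array([[1.0, 1.0]])    # x0 + x1 <= 4
b_ub = np.array([4.0])
bounds = [(0, 3), (0, 3)]        # lower and upper bounds on each variable

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)                   # primal solution: [1. 3.]
print(res.ineqlin.marginals)   # dual value of the inequality constraint
```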
Peralta, Guadalupe; Frost, Carol M; Rand, Tatyana A; Didham, Raphael K; Tylianakis, Jason M
2014-07-01
Complementary resource use and redundancy of species that fulfill the same ecological role are two mechanisms that can respectively increase and stabilize process rates in ecosystems. For example, predator complementarity and redundancy can determine prey consumption rates and their stability, yet few studies take into account the multiple predator species attacking multiple prey at different rates in natural communities. Thus, it remains unclear whether these biodiversity mechanisms are important determinants of consumption in entire predator-prey assemblages, such that food-web interaction structure determines community-wide consumption and stability. Here, we use empirical quantitative food webs to study the community-wide effects of functional complementarity and redundancy of consumers (parasitoids) on herbivore control in temperate forests. We find that complementarity in host resource use by parasitoids was a strong predictor of absolute parasitism rates at the community level and that redundancy in host-use patterns stabilized community-wide parasitism rates in space, but not through time. These effects can potentially explain previous contradictory results from predator diversity research. Phylogenetic diversity (measured using taxonomic distance) did not explain functional complementarity or parasitism rates, so could not serve as a surrogate measure for functional complementarity. Our study shows that known mechanisms underpinning predator diversity effects on both functioning and stability can easily be extended to link food webs to ecosystem functioning.
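One common way to quantify complementarity in host use from a quantitative food web is one minus the mean pairwise overlap of the consumers' resource-use proportions (Schoener's proportional similarity). The sketch below uses that conventional index with invented interaction counts; it is not necessarily the exact metric used in the study.

```python
import numpy as np

# Complementarity of consumers = 1 - mean pairwise overlap of their
# host-use proportions (Schoener's proportional-similarity index).
# Interaction counts are hypothetical.

def complementarity(use_matrix):
    """Rows = parasitoid species, columns = host species (interaction counts)."""
    P = use_matrix / use_matrix.sum(axis=1, keepdims=True)
    n = P.shape[0]
    overlaps = [1 - 0.5 * np.abs(P[i] - P[j]).sum()
                for i in range(n) for j in range(i + 1, n)]
    return 1 - np.mean(overlaps)

identical = np.array([[5, 5], [5, 5]], dtype=float)   # same host use
disjoint  = np.array([[10, 0], [0, 10]], dtype=float) # no shared hosts
print(complementarity(identical))  # 0.0 -> full redundancy
print(complementarity(disjoint))   # 1.0 -> full complementarity
```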
Temporal and spatial complementarity of the wind and the solar resources in the Iberian Peninsula
NASA Astrophysics Data System (ADS)
Jerez, Sonia; Trigo, Ricardo M.; Sarsa, Antonio; Lorente-PLazas, Raquel; Pozo-Vázquez, David; Montávez, Juan Pedro
2013-04-01
Both Iberian countries (Portugal and Spain) are investing considerably in new wind and solar power plants to achieve a sustainable future, in both environmental and economic terms. Resource evaluation, aimed at optimizing power generation according to the energy demand, is a mandatory requisite for the success of such a large volume of investment. However, this aim is difficult to attain due to the lack of lengthy and reliable observational datasets, which implies poor spatial coverage. Hence, here we rely on a hindcast simulation spanning the period 1959-2007 and covering the whole Iberian Peninsula at a resolution of 10 km to retrieve the primary meteorological variables from which estimates of wind and solar power are made. On this basis, we have investigated the temporal (at the monthly timescale) and spatial complementarity of the wind and solar resources in the Iberian Peninsula. The annual cycle of energy demand in Iberia shows two maxima, centered in winter and summer, and relatively smaller loads during the transitional seasons, with both the shape and the monthly values of this cycle having experienced small changes in recent years. Since the annual cycle of wind (solar) power presents a clear maximum in winter (summer), it follows that the two cycles could be combined to achieve the shape required by the annual cycle of energy demand. Interannually, both resources show large variability in the winter months. Nevertheless, our results indicate that the monthly series of wind and solar power are strongly anticorrelated during winter, and thus both series could also be combined to achieve minimum interannual variability in the resulting wind-plus-solar production output. Moreover, we found that this interannual complementarity is related, at least partially, to the multiple influence of the three main large-scale modes of climatic variability affecting Europe (NAO, EA and SCAND) since while their positive phases enhance
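The variance-reducing effect of combining anticorrelated wind and solar series has a simple closed form: for a mix w*wind + (1-w)*solar, the minimum-variance weight is w* = (var(solar) - cov) / (var(wind) + var(solar) - 2 cov). The sketch below demonstrates this on synthetic anomalies, not the 1959-2007 hindcast data.

```python
import numpy as np

# Synthetic illustration of wind-solar complementarity: anticorrelated
# series combined with the closed-form minimum-variance weight.
rng = np.random.default_rng(0)
common = rng.normal(size=200)
wind  = 1.0 + common + 0.3 * rng.normal(size=200)
solar = 1.0 - common + 0.3 * rng.normal(size=200)   # strongly anticorrelated

cov = np.cov(wind, solar)
w = (cov[1, 1] - cov[0, 1]) / (cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])
mix = w * wind + (1 - w) * solar

print(np.corrcoef(wind, solar)[0, 1])                   # close to -1
print(np.var(mix) < min(np.var(wind), np.var(solar)))   # variance shrinks
```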
Non-linear aeroelastic prediction for aircraft applications
NASA Astrophysics Data System (ADS)
de C. Henshaw, M. J.; Badcock, K. J.; Vio, G. A.; Allen, C. B.; Chamberlain, J.; Kaynes, I.; Dimitriadis, G.; Cooper, J. E.; Woodgate, M. A.; Rampurawala, A. M.; Jones, D.; Fenwick, C.; Gaitonde, A. L.; Taylor, N. V.; Amor, D. S.; Eccles, T. A.; Denley, C. J.
2007-05-01
Current industrial practice for the prediction and analysis of flutter relies heavily on linear methods and this has led to overly conservative design and envelope restrictions for aircraft. Although the methods have served the industry well, it is clear that for a number of reasons the inclusion of non-linearity in the mathematical and computational aeroelastic prediction tools is highly desirable. The increase in available and affordable computational resources, together with major advances in algorithms, mean that non-linear aeroelastic tools are now viable within the aircraft design and qualification environment. The Partnership for Unsteady Methods in Aerodynamics (PUMA) Defence and Aerospace Research Partnership (DARP) was sponsored in 2002 to conduct research into non-linear aeroelastic prediction methods and an academic, industry, and government consortium collaborated to address the following objectives: To develop useable methodologies to model and predict non-linear aeroelastic behaviour of complete aircraft. To evaluate the methodologies on real aircraft problems. To investigate the effect of non-linearities on aeroelastic behaviour and to determine which have the greatest effect on the flutter qualification process. These aims have been very effectively met during the course of the programme and the research outputs include: New methods available to industry for use in the flutter prediction process, together with the appropriate coaching of industry engineers. Interesting results in both linear and non-linear aeroelastics, with comprehensive comparison of methods and approaches for challenging problems. Additional embryonic techniques that, with further research, will further improve aeroelastics capability. This paper describes the methods that have been developed and how they are deployable within the industrial environment. We present a thorough review of the PUMA aeroelastics programme together with a comprehensive review of the relevant research
Superconducting linear actuator
NASA Technical Reports Server (NTRS)
Johnson, Bruce; Hockney, Richard
1993-01-01
Special actuators are needed to control the orientation of large structures in space-based precision pointing systems. Electromagnetic actuators that presently exist are too large in size and their bandwidth is too low. Hydraulic fluid actuation also presents problems for many space-based applications. Hydraulic oil can escape in space and contaminate the environment around the spacecraft. A research study was performed that selected an electrically-powered linear actuator that can be used to control the orientation of a large pointed structure. This research surveyed available products, analyzed the capabilities of conventional linear actuators, and designed a first-cut candidate superconducting linear actuator. The study first examined theoretical capabilities of electrical actuators and determined their problems with respect to the application and then determined if any presently available actuators or any modifications to available actuator designs would meet the required performance. The best actuator was then selected based on available design, modified design, or new design for this application. The last task was to proceed with a conceptual design. No commercially-available linear actuator or modification capable of meeting the specifications was found. A conventional moving-coil dc linear actuator would meet the specification, but the back-iron for this actuator would weigh approximately 12,000 lbs. A superconducting field coil, however, eliminates the need for back iron, resulting in an actuator weight of approximately 1000 lbs.
Fargione, Joseph; Tilman, David; Dybzinski, Ray; Lambers, Janneke Hille Ris; Clark, Chris; Harpole, W. Stanley; Knops, Johannes M.H; Reich, Peter B; Loreau, Michel
2007-01-01
In a 10-year (1996–2005) biodiversity experiment, the mechanisms underlying the increasingly positive effect of biodiversity on plant biomass production shifted from sampling to complementarity over time. The effect of diversity on plant biomass was associated primarily with the accumulation of higher total plant nitrogen pools (N g m−2) and secondarily with more efficient N use at higher diversity. The accumulation of N in living plant biomass was significantly increased by the presence of legumes, C4 grasses, and their combined presence. Thus, these results provide clear evidence for the increasing effects of complementarity through time and suggest a mechanism whereby diversity increases complementarity through the increased input and retention of N, a commonly limiting nutrient. PMID:17251113
Linear Algebraic Method for Non-Linear Map Analysis
Yu,L.; Nash, B.
2009-05-04
We present a newly developed method to analyze non-linear dynamics problems such as the Hénon map using a matrix-analysis method from linear algebra. Choosing the Hénon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition, which is widely used in conventional linear algebra.
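The linear-algebra viewpoint can be illustrated by linearizing the Hénon map at its fixed point and examining the Jacobian's spectrum (when the eigenvalues are distinct, the Jordan form is diagonal, so Jordan decomposition reduces to an eigendecomposition). This is a sketch of the general idea with the classic parameters a = 1.4, b = 0.3, not a reproduction of the paper's analysis.

```python
import numpy as np

# Linearize the Henon map x' = 1 - a*x^2 + y, y' = b*x at its fixed point
# and inspect the Jacobian's eigenvalues. Classic parameters assumed.
a, b = 1.4, 0.3

def henon(x, y):
    return 1 - a * x**2 + y, b * x

# Fixed point: x* = (-(1 - b) + sqrt((1 - b)^2 + 4a)) / (2a), y* = b*x*
xf = (-(1 - b) + np.sqrt((1 - b)**2 + 4 * a)) / (2 * a)
yf = b * xf
assert np.allclose(henon(xf, yf), (xf, yf))

J = np.array([[-2 * a * xf, 1.0],
              [b, 0.0]])           # Jacobian of the map at (x*, y*)
eigvals, eigvecs = np.linalg.eig(J)
print(eigvals)           # one eigenvalue has |lambda| > 1: a saddle point
print(np.prod(eigvals))  # product equals det J = -b
```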
NASA Astrophysics Data System (ADS)
Ramirez Camargo, L.; Zink, R.; Dorner, W.
2015-07-01
Spatial assessments of the potential of renewable energy sources (RES) have become a valuable information basis for policy- and decision-making. These studies, however, do not explicitly consider the temporal variability of RES such as solar energy or wind. Until now, the focus has usually been on economic profitability based on yearly balances, which do not allow a comprehensive examination of the complementarity of RES technologies. Increasing the temporal resolution of energy output estimates makes it possible to plan the aggregation of a diverse pool of RES plants, i.e., to conceive of the system as a virtual power plant (VPP). This paper presents a spatiotemporal analysis methodology to estimate the RES potential of municipalities. The methodology relies on a combination of open-source geographic information system (GIS) processing tools and the in-memory array-processing environment of Python and NumPy. Beyond the typical identification of suitable locations for power plants, it is possible to define which of them are best for a balanced local energy supply. A case study of a municipality, using spatial data with one-square-meter spatial resolution and one-hour temporal resolution, shows strong complementarity of photovoltaic and wind power. Furthermore, it is shown that a detailed deployment strategy for potential RES locations, calculated with modest computational requirements, can help municipalities develop VPPs and improve security of supply.
Emergence of complementarity and the Baconian roots of Niels Bohr's method
NASA Astrophysics Data System (ADS)
Perovic, Slobodan
2013-08-01
I argue that instead of a rather narrow focus on N. Bohr's account of complementarity as a particular and perhaps obscure metaphysical or epistemological concept (or as being motivated by such a concept), we should consider it to result from pursuing a particular method of studying physical phenomena. More precisely, I identify a strong undercurrent of Baconian method of induction in Bohr's work that likely emerged during his experimental training and practice. When its development is analyzed in light of Baconian induction, complementarity emerges as a levelheaded rather than a controversial account, carefully elicited from a comprehensive grasp of the available experimental basis, shunning hasty metaphysically motivated generalizations based on partial experimental evidence. In fact, Bohr's insistence on the "classical" nature of observations in experiments, as well as the counterintuitive synthesis of wave and particle concepts that have puzzled scholars, seem a natural outcome (an updated instance) of the inductive method. Such analysis clarifies the intricacies of early Schrödinger's critique of the account as well as Bohr's response, which have been misinterpreted in the literature. If adequate, the analysis may lend considerable support to the view that Bacon explicated the general terms of an experimentally minded strand of the scientific method, developed and refined by scientists in the following three centuries.
What is complementarity?: Niels Bohr and the architecture of quantum theory
NASA Astrophysics Data System (ADS)
Plotnitsky, Arkady
2014-12-01
This article explores Bohr’s argument, advanced under the heading of ‘complementarity,’ concerning quantum phenomena and quantum mechanics, and its physical and philosophical implications. In Bohr, the term complementarity designates both a particular concept and an overall interpretation of quantum phenomena and quantum mechanics, in part grounded in this concept. While the argument of this article is primarily philosophical, it will also address, historically, the development and transformations of Bohr’s thinking, under the impact of the development of quantum theory and Bohr’s confrontation with Einstein, especially their exchange concerning the EPR experiment, proposed by Einstein, Podolsky and Rosen in 1935. Bohr’s interpretation was progressively characterized by a more radical epistemology, in its ultimate form, which was developed in the 1930s and with which I shall be especially concerned here, defined by his new concepts of phenomenon and atomicity. According to this epistemology, quantum objects are seen as indescribable and possibly even as inconceivable, and as manifesting their existence only in the effects of their interactions with measuring instruments upon those instruments, effects that define phenomena in Bohr’s sense. The absence of causality is an automatic consequence of this epistemology. I shall also consider how probability and statistics work under these epistemological conditions.
Principles of miRNA-mRNA interactions: beyond sequence complementarity.
Afonso-Grunz, Fabian; Müller, Sören
2015-08-01
MicroRNAs (miRNAs) are small non-coding RNAs that post-transcriptionally regulate gene expression by altering the translation efficiency and/or stability of targeted mRNAs. In vertebrates, more than 50% of all protein-coding RNAs are assumed to be subject to miRNA-mediated control, but current high-throughput methods that reliably measure miRNA-mRNA interactions either require prior knowledge of target mRNAs or elaborate preparation procedures. Consequently, experimentally validated interactions are relatively rare. Furthermore, in silico prediction based on sequence complementarity of miRNAs and their corresponding target sites suffers from extremely high false positive rates. Apparently, sequence complementarity alone is often insufficient to reflect the complex post-transcriptional regulation of mRNAs by miRNAs, which is especially true for animals. Therefore, combined analysis of small non-coding and protein-coding RNAs is indispensable to better understand and predict the complex dynamics of miRNA-regulated gene expression. Single-nucleotide polymorphisms (SNPs) and alternative polyadenylation (APA) can affect miRNA binding of a given transcript from different individuals and tissues, and especially APA is currently emerging as a major factor that contributes to variations in miRNA-mRNA interplay in animals. In this review, we focus on the influence of APA and SNPs on miRNA-mediated gene regulation and discuss the computational approaches that take these mechanisms into account.
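The bare sequence-complementarity test that the review argues is insufficient on its own can be sketched in a few lines: the canonical seed (nucleotides 2-8 of the miRNA) is reverse-complemented and searched for in a 3'UTR. The sequences below are chosen purely for illustration.

```python
# Minimal seed-match scan: find 3'UTR sites complementary to a miRNA seed.
# As the review stresses, such matches alone predict targets poorly; SNPs
# and APA can add or remove sites. Sequences are illustrative only.

COMP = str.maketrans("AUCG", "UAGC")

def seed_sites(mirna, utr):
    """Return (positions, site): 0-based UTR positions matching the seed.

    Seed = nucleotides 2-8 of the miRNA (5'->3'); a canonical target site
    is the reverse complement of the seed appearing in the UTR.
    """
    seed = mirna[1:8]
    site = seed.translate(COMP)[::-1]   # reverse complement of the seed
    hits = [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]
    return hits, site

mirna = "UAGCUUAUCAGACUGAUGUUGA"        # miR-21-like sequence, illustrative
utr   = "GGAAUAAGCUAACCAUAAGCUA"        # invented 3'UTR fragment
positions, site = seed_sites(mirna, utr)
print(site, positions)                   # AUAAGCU [3, 14]
```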
[The Principle of Genome Complementarity in the Enhancement of Plant Adaptive Capacities].
Tikhonovich, I A; Andronov, E E; Borisov, A Yu; Dolgikh, E A; Zhernakov, A I; Zhukov, V A; Provorov, N A; Roumiantseva, M L; Simarov, B V
2015-09-01
In the present work, the potential for enhancing the adaptive capacity of microbe-plant systems (MPSs) through the integration of the symbiosis partners' genomes is considered using examples of different types of symbiotic relationships. The accumulated data on the genetic control of interactions for both the plant and the microbe, which are discussed in the paper with respect to signaling genes, suggest that it is the complementarity of genetic determinants that underlies the successful formation of MPSs. A eukaryotic genome with limited information content, which is stable throughout a generation, is complemented by a virtually unlimited prokaryotic metagenome. The microsymbiont's ability to adapt to different living conditions is based on the restructuring of the accessory genome by different mechanisms, which are likely to be activated under the influence of plants, although the details of such regulation remain unknown. Features of the genetic control of the interaction, particularly its universal character for different symbionts, allow us to formulate a principle of genome complementarity with respect to interacting organisms and consider it an important factor, an adaptation that enhances the abilities of MPSs for their sustainable development in natural ecosystems and for high plant productivity in agrocenoses. PMID:26606794
Wave-particle dualism and complementarity unraveled by a different mode
Menzel, Ralf; Puhlmann, Dirk; Heuer, Axel; Schleich, Wolfgang P.
2012-01-01
The precise knowledge of one of two complementary experimental outcomes prevents us from obtaining complete information about the other one. This formulation of Niels Bohr’s principle of complementarity when applied to the paradigm of wave-particle dualism—that is, to Young’s double-slit experiment—implies that the information about the slit through which a quantum particle has passed erases interference. In the present paper we report a double-slit experiment using two photons created by spontaneous parametric down-conversion where we observe interference in the signal photon despite the fact that we have located it in one of the slits due to its entanglement with the idler photon. This surprising aspect of complementarity comes to light by our special choice of the TEM01 pump mode. According to quantum field theory the signal photon is then in a coherent superposition of two distinct wave vectors giving rise to interference fringes analogous to two mechanical slits. PMID:22628561
Chavarria, Delia; Ramos-Serrano, Andrea; Hirao, Ichiro; Berdis, Anthony J.
2011-01-01
O6-methylguanine is a miscoding DNA lesion arising from the alkylation of guanine. This report uses the bacteriophage T4 DNA polymerase as a model to probe the roles hydrogen-bonding interactions, shape/size, and nucleobase desolvation during the replication of this miscoding lesion. This was accomplished by using transient kinetic techniques to monitor the kinetic parameters for incorporating and extending natural and non-natural nucleotides. In general, the efficiency of nucleotide incorporation does not depend on the hydrogen-bonding potential of the incoming nucleotide. Instead, nucleobase hydrophobicity and shape complementarity appear to be the preeminent factors controlling nucleotide incorporation. In addition, shape complementarity plays a large role in controlling the extension of various mispairs containing O6-methylguanine. This is evident as the rate constants for extension correlate with proper interglycosyl distances and symmetry between the base angles of the formed mispair. Base pairs not conforming to an acceptable geometry within the polymerase’s active site are refractory to elongation and are processed via exonuclease proofreading. The collective data set encompassing nucleotide incorporation, extension, and excision is used to generate a model accounting for the mutagenic potential of O6-methylguanine observed in vivo. In addition, kinetic studies monitoring the incorporation and extension of non-natural nucleotides identified an analog that displays high selectivity for incorporation opposite O6-methylguanine compared to unmodified purines. The unusual selectivity of this analog for replicating damaged DNA provides a novel biochemical tool to study translesion DNA synthesis. PMID:21819995
Norris, Vic; Root-Bernstein, Robert
2009-06-04
In the "ecosystems-first" approach to the origins of life, networks of non-covalent assemblies of molecules (composomes), rather than individual protocells, evolved under the constraints of molecular complementarity. Composomes evolved into the hyperstructures of modern bacteria. We extend the ecosystems-first approach to explain the origin of eukaryotic cells through the integration of mixed populations of bacteria. We suggest that mutualism and symbiosis resulted in cellular mergers entailing the loss of redundant hyperstructures, the uncoupling of transcription and translation, and the emergence of introns and multiple chromosomes. Molecular complementarity also facilitated integration of bacterial hyperstructures to perform cytoskeletal and movement functions.
Lombaert, Eric; Guillemaud, Thomas; Lundgren, Jonathan; Koch, Robert; Facon, Benoît; Grez, Audrey; Loomans, Antoon; Malausa, Thibaut; Nedved, Oldrich; Rhule, Emma; Staverlokk, Arnstein; Steenberg, Tove; Estoup, Arnaud
2014-12-01
Inferences about introduction histories of invasive species remain challenging because of the stochastic demographic processes involved. Approximate Bayesian computation (ABC) can help to overcome these problems, but such method requires a prior understanding of population structure over the study area, necessitating the use of alternative methods and an intense sampling design. In this study, we made inferences about the worldwide invasion history of the ladybird Harmonia axyridis by various population genetics statistical methods, using a large set of sampling sites distributed over most of the species' native and invaded areas. We evaluated the complementarity of the statistical methods and the consequences of using different sets of site samples for ABC inferences. We found that the H. axyridis invasion has involved two bridgehead invasive populations in North America, which have served as the source populations for at least six independent introductions into other continents. We also identified several situations of genetic admixture between differentiated sources. Our results highlight the importance of coupling ABC methods with more traditional statistical approaches. We found that the choice of site samples could affect the conclusions of ABC analyses comparing possible scenarios. Approaches involving independent ABC analyses on several sample sets constitute a sensible solution, complementary to standard quality controls based on the analysis of pseudo-observed data sets, to minimize erroneous conclusions. This study provides biologists without expertise in this area with detailed methodological and conceptual guidelines for making inferences about invasion routes when dealing with a large number of sampling sites and complex population genetic structures. PMID:25369988
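The core of Approximate Bayesian Computation can be sketched in its simplest rejection form: draw parameters from the prior, simulate data, and keep draws whose summary statistic lands close to the observed one. This is a minimal illustration with a toy model, not the scenario-comparison machinery used in the study.

```python
import numpy as np

# Minimal rejection-ABC sketch: accept prior draws whose simulated summary
# statistic falls within a tolerance of the observed statistic.
rng = np.random.default_rng(42)

observed_mean = 2.0                      # summary statistic of the "data"

def simulate(theta, n=50):
    """Toy generative model: mean of n draws from N(theta, 1)."""
    return rng.normal(theta, 1.0, n).mean()

prior_draws = rng.uniform(0, 5, 20000)   # flat prior on theta
accepted = [t for t in prior_draws
            if abs(simulate(t) - observed_mean) < 0.1]

posterior_mean = np.mean(accepted)
print(len(accepted), posterior_mean)     # posterior concentrates near 2.0
```

In practice, as the abstract notes, results depend on which site samples feed the analysis, so ABC runs are repeated on several sample sets and checked against pseudo-observed data.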
Linear Programming Problems for Generalized Uncertainty
ERIC Educational Resources Information Center
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent an information. This dissertation concerns merely discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…
Linear Programming across the Curriculum
ERIC Educational Resources Information Center
Yoder, S. Elizabeth; Kurz, M. Elizabeth
2015-01-01
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear programming (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed-integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed-integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices, along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
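The branch-and-bound idea used for the integer case can be sketched generically: solve the LP relaxation, and if the solution is fractional, branch on a fractional variable with tightened bounds. SciPy's HiGHS solver stands in here for ALPS's revised simplex; the small instance is a textbook example, not taken from the manual.

```python
import math
import numpy as np
from scipy.optimize import linprog

# Generic branch-and-bound over LP relaxations (the technique the abstract
# names). SciPy's HiGHS LP solver replaces the revised simplex code.

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Minimize c @ x s.t. A_ub @ x <= b_ub, x integer within bounds."""
    best = (math.inf, None)
    stack = [bounds]
    while stack:
        bnds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds, method="highs")
        if not res.success or res.fun >= best[0]:
            continue                      # infeasible, or pruned by bound
        frac = [i for i, v in enumerate(res.x)
                if abs(v - round(v)) > 1e-6]
        if not frac:                      # integral: new incumbent
            best = (res.fun, np.round(res.x))
            continue
        i, v = frac[0], res.x[frac[0]]    # branch on first fractional var
        lo, hi = bnds[i]
        stack.append([*bnds[:i], (lo, math.floor(v)), *bnds[i + 1:]])
        stack.append([*bnds[:i], (math.ceil(v), hi), *bnds[i + 1:]])
    return best

# maximize 5x + 4y  s.t.  6x + 4y <= 24,  x + 2y <= 6,  x, y integer >= 0
obj, x = branch_and_bound(
    c=[-5.0, -4.0],
    A_ub=[[6.0, 4.0], [1.0, 2.0]],
    b_ub=[24.0, 6.0],
    bounds=[(0, 10), (0, 10)],
)
print(-obj, x)   # optimum 20 at x = [4, 0]
```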
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles into the field. The result of the foregoing is a beam which spirals about the axis of the acceleration path. The electric fields and the angular motion of the particles cooperate to provide a stable and focused particle beam.
Designing linear systolic arrays
Kumar, V.K.P.; Tsai, Y.C. (Dept. of Electrical Engineering)
1989-12-01
The authors develop a simple mapping technique to design linear systolic arrays. The basic idea of the technique is to map the computations of a certain class of two-dimensional systolic arrays onto one-dimensional arrays. Using this technique, systolic algorithms are derived for problems such as matrix multiplication and transitive closure on linearly connected arrays of PEs with constant I/O bandwidth. Compared to known designs in the literature, the technique leads to modular systolic arrays with constant hardware in each PE, few control lines, lexicographic data input/output, and improved delay time. The unidirectional flow of control and data in this design ensures that the linear array can be implemented under the known fault models of wafer-scale integration.
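The core idea, mapping a two-dimensional computation onto a linearly connected array of PEs with constant local storage, can be illustrated with a toy discrete-time simulation of matrix-vector multiplication on a skewed systolic schedule (our own sketch for illustration, not the authors' design):

```python
def systolic_matvec(A, x):
    """Simulate y = A x on a linear systolic array: PE i holds one
    accumulator (constant storage), and operand pair (a_ij, x_j) reaches
    PE i at beat i + j after streaming through PEs 0..i-1."""
    n = len(x)
    acc = [0.0] * n                      # one accumulator register per PE
    for step in range(2 * n - 1):        # 2n - 1 beats to drain the pipeline
        for i in range(n):               # all PEs fire "in parallel" each beat
            j = step - i                 # which x_j / a_ij reaches PE i now
            if 0 <= j < n:
                acc[i] += A[i][j] * x[j]
    return acc

y = systolic_matvec([[1, 2], [3, 4]], [5, 6])   # [17.0, 39.0]
```

Each PE performs one multiply-accumulate per beat; the skew `j = step - i` is what lets a single stream of data feed every PE with constant I/O bandwidth.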
NASA Technical Reports Server (NTRS)
2006-01-01
[figure removed for brevity, see original site] Context image for PIA03667 Linear Clouds
These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image.
Image information: VIS instrument. Latitude -80.1N, Longitude 52.1E. 17 meter/pixel resolution.
Note: this THEMIS visual image has been neither radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.
NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Liolios, K.; Georgiev, I.; Liolios, A.
2012-10-01
A numerical approach for a problem arising in Civil and Environmental Engineering is presented. This problem concerns the dynamic soil-pipeline interaction, when unilateral contact conditions due to tensionless and elastoplastic softening/fracturing behaviour of the soil, as well as due to gapping caused by earthquake excitations, are taken into account. Moreover, soil-capacity degradation due to environmental effects is taken into account. The mathematical formulation of this dynamic elastoplasticity problem leads to a system of partial differential equations with equality domain and inequality boundary conditions. The proposed numerical approach is based on a double discretization, in space and time, and on mathematical programming methods. First, in space the finite element method (FEM) is used for the simulation of the pipeline and the unilateral contact interface, in combination with the boundary element method (BEM) for the soil simulation. Concepts of non-convex analysis are used. Next, with the aid of the Laplace transform, the equality problem conditions are transformed to convolutional ones involving as unknowns the unilateral quantities only. So the number of unknowns is significantly reduced. Then a marching-time approach is applied and a non-convex linear complementarity problem is solved in each time-step.
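The complementarity conditions behind the per-time-step problem can be illustrated with a minimal solver for the standard linear complementarity problem, w = M z + q, w >= 0, z >= 0, w^T z = 0, which is how unilateral contact (non-penetration plus non-negative contact pressure) is typically encoded. This is a generic projected Gauss-Seidel sketch with invented data, not the paper's mathematical-programming method for the non-convex case:

```python
def pgs_lcp(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: w = M z + q, w, z >= 0, w.z = 0.
    Converges for symmetric positive definite M."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # residual of row i excluding the diagonal contribution
            r = q[i] + sum(M[i][j] * z[j] for j in range(n)) - M[i][i] * z[i]
            z[i] = max(0.0, -r / M[i][i])   # project onto z_i >= 0
    return z

# tiny positive definite example; solution is z = (0.5, 0), w = (0, 1.5)
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, 1.0]
z = pgs_lcp(M, q)
```

At the solution either the gap (w) or the pressure (z) vanishes in each component, exactly the unilateral behavior described in the abstract.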
ERIC Educational Resources Information Center
Scupola, Ada
1999-01-01
Discussion of the publishing industry and its use of information and communication technologies focuses on the way in which electronic-commerce technologies are changing and could change the publishing processes, and develops a business complementarity model of electronic publishing to maximize profitability and improve the competitive position.…
ERIC Educational Resources Information Center
Stroup, Walter M.; Wilensky, Uri
2014-01-01
Placed in the larger context of broadening the engagement with systems dynamics and complexity theory in school-aged learning and teaching, this paper is intended to introduce, situate, and illustrate--with results from the use of network supported participatory simulations in classrooms--a stance we call "embedded complementarity" as an…
Technology Transfer Automated Retrieval System (TEKTRAN)
Complementary resource use and redundancy of species that fulfil the same ecological role are two mechanisms that can increase and stabilize process rates in ecosystems. For example, predator complementarity and redundancy can determine prey consumption rates, in some cases providing invaluable cont...
Klohnen, Eva C; Luo, Shanhong
2003-10-01
Little is known about whether personality characteristics influence initial attraction. Because adult attachment differences influence a broad range of relationship processes, the authors examined their role in 3 experimental attraction studies. The authors tested four major attraction hypotheses--self similarity, ideal-self similarity, complementarity, and attachment security--and examined both actual and perceptual factors. Replicated analyses across samples, designs, and manipulations showed that actual security and self similarity predicted attraction. With regard to perceptual factors, ideal similarity, self similarity, and security all were significant predictors. Whereas perceptual ideal and self similarity had incremental predictive power, perceptual security's effects were subsumed by perceptual ideal similarity. Perceptual self similarity fully mediated actual attachment similarity effects, whereas ideal similarity was only a partial mediator. PMID:14561124
Complementarity of weak lensing and peculiar velocity measurements in testing general relativity
Song, Yong-Seon; Zhao Gongbo; Bacon, David; Koyama, Kazuya; Nichol, Robert C.; Pogosian, Levon
2011-10-15
We explore the complementarity of weak lensing and galaxy peculiar velocity measurements to better constrain modifications to General Relativity. We find no evidence for deviations from General Relativity on cosmological scales from a combination of peculiar velocity measurements (for Luminous Red Galaxies in the Sloan Digital Sky Survey) with weak lensing measurements (from the Canada-France-Hawaii Telescope Legacy Survey). We provide a Fisher error forecast for a Euclid-like space-based survey including both lensing and peculiar velocity measurements and show that the expected constraints on modified gravity will be at least an order of magnitude better than with present data, i.e. we will obtain ≈5% errors on the modified gravity parametrization described here. We also present a model-independent method for constraining modified gravity parameters using tomographic peculiar velocity information, and apply this methodology to the present data set.
Morin, Xavier; Fahse, Lorenz; Scherer-Lorenzen, Michael; Bugmann, Harald
2011-12-01
Understanding the link between biodiversity and ecosystem functioning (BEF) is pivotal in the context of global biodiversity loss. Yet, long-term effects have been explored only weakly, especially for forests, and no clear evidence has been found regarding the underlying mechanisms. We explore the long-term relationship between diversity and productivity using a forest succession model. Extensive simulations show that tree species richness promotes productivity in European temperate forests across a large climatic gradient, mostly through strong complementarity between species. We show that this biodiversity effect emerges because increasing species richness promotes higher diversity in shade tolerance and growth ability, which results in forests responding faster to small-scale mortality events. Our study generalises results from short-term experiments in grasslands to forest ecosystems and demonstrates that competition for light alone induces a positive effect of biodiversity on productivity, thus providing a new angle for explaining BEF relationships. PMID:21955682
Antibody Complementarity-Determining Regions (CDRs): A Bridge between Adaptive and Innate Immunity
Cenci, Elio; Ortelli, Federica; Magliani, Walter; Ciociola, Tecla; Bistoni, Francesco; Conti, Stefania; Vecchiarelli, Anna; Polonelli, Luciano
2009-01-01
Background: It has been documented that, independently from the specificity of the native antibody (Ab) for a given antigen (Ag), complementarity determining regions (CDR)-related peptides may display differential antimicrobial, antiviral and antitumor activities. Methodology/Principal Findings: In this study we demonstrate that a synthetic peptide with sequence identical to VHCDR3 of a mouse monoclonal Ab (mAb) specific for difucosyl human blood group A is easily taken up by macrophages with subsequent stimulation of: i) proinflammatory cytokine production; ii) the PI3K-Akt pathway and iii) TLR-4 expression. Significantly, VHCDR3 exerts a therapeutic effect against systemic candidiasis without possessing direct candidacidal properties. Conclusions/Significance: These results open a new scenario about the possibility that, beyond the half life of immunoglobulins, Ab fragments may effectively influence the antiinfective cellular immune response in a way reminiscent of regulatory peptides of innate immunity. PMID:19997599
Studies of a human lambda-chain epitope related to a complementarity-determining region.
Kim, H S; Deutsch, H F
1988-01-01
A tryptic nonadecapeptide representing the 24-42 sequence of the MCG lambda-type Bence-Jones protein, which contains its entire complementarity-determining region-1, was isolated. The peptide was utilized in preparing an affinity column that was used to isolate an antibody having the reactivity of a previously employed idiotypic antibody to MCG. This antibody preparation, as well as 13 monoclonal mouse antibodies to human lambda-chains, was employed in an enzyme-linked immunoassay to detect other Bence-Jones proteins with this serologic specificity. The results obtained with two of the monoclonal antibodies suggest that the epitope in question is a noncontiguous one. PMID:2452787
NASA Astrophysics Data System (ADS)
Francois, Baptiste; Borga, Marco; Creutin, Jean-Dominique; Hingray, Benoit; Raynaud, Damien; Sauterleute, Julian-Friedrich
2015-04-01
Climate-related energy sources such as wind, solar and runoff are variable in time and space, following their driving weather variables. High penetration of such energy sources into the energy network might be facilitated by exploiting the complementarity of different energy sources: complementary resources reduce the imbalance between the energy load and the production from the energy mix. This study presents an analysis of the effect of a 100% renewable energy mix composed of solar and run-of-the-river energy in three administrative units in Northern Italy. These two energy sources are the main ones in this area. Solar power is generated in the flat Veneto plains, and run-of-the-river power is generated at the two opposite ends of a climate transect going from the Alpine crest (snow-melt dominated area) to the Veneto plains (rainfall dominated area). The manageability of each energy source is first discussed through the analysis of the standardized auto-correlation of the energy balance obtained using each energy source alone. Covering all possible energy mixes among these sources, we then analyze their complementarity across different time scales using two indicators. The first is the standard deviation of the energy balance. The second is the theoretical storage required for balancing. Results show that at small time scales (hourly), a high share of run-of-the-river power minimizes the energy balance variability. The opposite is obtained at larger time scales (daily and monthly), essentially because of the lower variability of solar power generation at those time scales, which also implies a lower storage requirement.
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve linear systems of equations, linear programming problems, and matrix inversion problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
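For comparison with the spreadsheet treatment, the same three tasks can be done in a few lines of NumPy/SciPy (a generic sketch with invented data, not the article's Excel worksheets):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)        # linear system A x = b  ->  x = (2, 3)
A_inv = np.linalg.inv(A)         # matrix inversion

# linear program: minimize x + y subject to A [x, y]^T >= b, x, y >= 0;
# linprog takes <= constraints, so negate both sides of A x >= b
lp = linprog([1.0, 1.0], A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2,
             method="highs")
```

The LP optimum here lands on the same vertex (2, 3) as the linear system, a useful cross-check when teaching all three topics together.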
Stability of Linear Equations--Algebraic Approach
ERIC Educational Resources Information Center
Cherif, Chokri; Goldstein, Avraham; Prado, Lucio M. G.
2012-01-01
This article could be of interest to teachers of applied mathematics as well as to people who are interested in applications of linear algebra. We give a comprehensive study of linear systems from an application point of view. Specifically, we give an overview of linear systems and problems that can occur with the computed solution when the…
Mechanism of translation based on intersubunit complementarities of ribosomal RNAs and tRNAs.
Nagano, Kozo; Nagano, Nozomi
2007-04-21
A universal rule is found about nucleotide sequence complementarities between the regions 2653-2666 in the GTPase-binding site of 23S rRNA and 1064-1077 of 16S rRNA as well as between the region 1103-1107 of 16S rRNA and GUUCG (or GUUCA) of tRNAs. This rule holds for all species in the living kingdoms except for two protista mitochondrial rRNAs of Trypanosoma brucei and Plasmodium falciparum. We found that quite similar relationships for the two species hold under the assumption presented in the present paper. The complementarity between the T-loop of tRNA and the region 1103-1107 of 16S rRNA suggests that the first interaction of a ribosome with the aminoacyl-tRNA·EF-Tu·GTP ternary complex or the EF-G·GDP complex could occur at the region 1103-1107 of 16S rRNA with the T-loop-D-loop contact region of the ternary complex or the domain IV-V bridge region of the EF-G·GDP complex. The second interaction should occur between the A-site codon and the anticodon loop or between the anticodon stem/loop of A-site tRNA and the tip of domain IV of EF-G. The above stepwise interactions would facilitate the collision of the region 1064-1077 of 16S rRNA with the region around A2660 at the alpha-sarcin/ricin loop of 23S rRNA. In this way, the universal rule is capable of explaining how the spectinomycin-binding region of 16S rRNA takes part in translocation, how GTPases such as EF-Tu and EF-G can be introduced into their binding site on the large subunit ribosome in proper orientation efficiently, and also how driving forces for tRNA movement are produced in translocation and codon recognition. The analysis of T-loops of all tRNAs also presents an evolutionary trend from a random and seemingly primitive sequence, defined as Y type, to the most developed structure, such as either 5G7 or 5A7 types in the present definition.
NASA Astrophysics Data System (ADS)
Liang, Yeong-Cherng; Spekkens, Robert W.; Wiseman, Howard M.
2011-09-01
In 1960, the mathematician Ernst Specker described a simple example of nonclassical correlations, the counter-intuitive features of which he dramatized using a parable about a seer, who sets an impossible prediction task to his daughter’s suitors. We revisit this example here, using it as an entrée to three central concepts in quantum foundations: contextuality, Bell-nonlocality, and complementarity. Specifically, we show that Specker’s parable offers a narrative thread that weaves together a large number of results, including the following: the impossibility of measurement-noncontextual and outcome-deterministic ontological models of quantum theory (the 1967 Kochen-Specker theorem), in particular, the recent state-specific pentagram proof of Klyachko; the impossibility of Bell-local models of quantum theory (Bell’s theorem), especially the proofs by Mermin and Hardy and extensions thereof; the impossibility of a preparation-noncontextual ontological model of quantum theory; the existence of triples of positive operator valued measures (POVMs) that can be measured jointly pairwise but not triplewise. Along the way, several novel results are presented: a generalization of a theorem by Fine connecting the existence of a joint distribution over outcomes of counterfactual measurements to the existence of a measurement-noncontextual and outcome-deterministic ontological model; a generalization of Klyachko’s proof of the Kochen-Specker theorem from pentagrams to a family of star polygons; a proof of the Kochen-Specker theorem in the style of Hardy’s proof of Bell’s theorem (i.e., one that makes use of the failure of the transitivity of implication for counterfactual statements); a categorization of contextual and Bell-nonlocal correlations in terms of frustrated networks; a derivation of a new inequality testing preparation noncontextuality; some novel results on the joint measurability of POVMs and the question of whether these can be modeled
NASA Astrophysics Data System (ADS)
Yamasaki, Tadashi; Houseman, Gregory; Hamling, Ian; Postek, Elek
2010-05-01
We have developed a new parallelized 3-D numerical code, OREGANO_VE, for the solution of the general visco-elastic problem in a rectangular block domain. The mechanical equilibrium equation is solved using the finite element method for a (non-)linear Maxwell visco-elastic rheology. Time-dependent displacement and/or traction boundary conditions can be applied. Matrix assembly is based on a tetrahedral element defined by 4 vertex nodes and 6 nodes located at the midpoints of the edges, and within which displacement is described by a quadratic interpolation function. For evaluating viscoelastic relaxation, an explicit time-stepping algorithm (Zienkiewicz and Cormeau, Int. J. Num. Meth. Eng., 8, 821-845, 1974) is employed. We test the accurate implementation of OREGANO_VE by comparing numerical and analytic (or semi-analytic half-space) solutions to different problems in a range of applications: (1) equilibration of stress in a constant density layer after gravity is switched on at t = 0 tests the implementation of spatially variable viscosity and non-Newtonian viscosity; (2) displacement of the welded interface between two blocks of differing viscosity tests the implementation of viscosity discontinuities; (3) displacement of the upper surface of a layer under applied normal load tests the implementation of time-dependent surface tractions; (4) visco-elastic response to dyke intrusion (compared with the solution in a half-space) tests the implementation of all aspects. In each case, the accuracy of the code is validated subject to use of a sufficiently small time step, providing assurance that the OREGANO_VE code can be applied to a range of visco-elastic relaxation processes in three dimensions, including post-seismic deformation and post-glacial uplift. The OREGANO_VE code includes a capability for representation of prescribed fault slip on an internal fault. The surface displacement associated with large earthquakes can be detected by some geodetic observations
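The explicit time-stepping of viscoelastic relaxation, and its accuracy "subject to use of a sufficiently small time step", can be illustrated on a zero-dimensional Maxwell element (a toy sketch of the general idea, not OREGANO_VE code): under constant strain the stress obeys d(sigma)/dt = -sigma/tau, with analytic solution sigma(t) = sigma0 * exp(-t/tau).

```python
import math

def relax(sigma0, tau, dt, steps):
    """Forward-Euler (explicit) relaxation of a Maxwell element held at
    constant strain: d(sigma)/dt = -sigma / tau."""
    sigma = sigma0
    for _ in range(steps):
        sigma += dt * (-sigma / tau)     # explicit viscous-strain update
    return sigma

tau = 1.0
numeric = relax(1.0, tau, dt=0.001, steps=1000)   # integrate to t = 1
exact = math.exp(-1.0)                            # analytic e^{-t/tau}
```

With dt much smaller than the Maxwell time tau the scheme tracks the analytic decay closely; with dt approaching tau it degrades, which is the one-variable analogue of the time-step restriction noted in the abstract.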
Hlaing, Lwin Mar; Fahmida, Umi; Htet, Min Kyaw; Utomo, Budi; Firmansyah, Agus; Ferguson, Elaine L
2016-07-01
Poor feeding practices result in inadequate nutrient intakes in young children in developing countries. To improve practices, local food-based complementary feeding recommendations (CFR) are needed. This cross-sectional survey aimed to describe current food consumption patterns of 12-23-month-old Myanmar children (n 106) from Ayeyarwady region in order to identify nutrient requirements that are difficult to achieve using local foods and to formulate affordable and realistic CFR to improve dietary adequacy. Weekly food consumption patterns were assessed using a 12-h weighed dietary record, single 24-h recall and a 5-d food record. Food costs were estimated by market surveys. CFR were formulated by linear programming analysis using WHO Optifood software and evaluated among mothers (n 20) using trial of improved practices (TIP). Findings showed that Ca, Zn, niacin, folate and Fe were 'problem nutrients': nutrients that did not achieve 100 % recommended nutrient intake even when the diet was optimised. Chicken liver, anchovy and roselle leaves were locally available nutrient-dense foods that would fill these nutrient gaps. The final set of six CFR would ensure dietary adequacy for five of twelve nutrients at a minimal cost of 271 kyats/d (based on the exchange rate of 900 kyats/USD at the time of data collection: 3rd quarter of 2012), but inadequacies remained for niacin, folate, thiamin, Fe, Zn, Ca and vitamin B6. TIP showed that mothers believed liver and vegetables would cause worms and diarrhoea, but these beliefs could be overcome to successfully promote liver consumption. Therefore, an acceptable set of CFR were developed to improve the dietary practices of 12-23-month-old Myanmar children using locally available foods. Alternative interventions such as fortification, however, are still needed to ensure dietary adequacy of all nutrients.
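The linear programming analysis described (performed in the study with WHO Optifood) is at heart a diet problem: minimize food cost subject to nutrient lower bounds and serving limits. A toy sketch with invented food costs and nutrient contents (illustrative only, not the Optifood data) looks like this:

```python
from scipy.optimize import linprog

# columns: [chicken liver, anchovy, roselle leaves], per serving
cost = [120.0, 80.0, 30.0]            # kyats per serving (invented)
iron = [9.0, 3.0, 2.0]                # mg per serving (invented)
calcium = [5.0, 230.0, 210.0]         # mg per serving (invented)
need_iron, need_calcium = 6.0, 400.0  # daily targets (invented)

# nutrient floors iron.x >= need become <= constraints by negation
res = linprog(cost,
              A_ub=[[-v for v in iron], [-v for v in calcium]],
              b_ub=[-need_iron, -need_calcium],
              bounds=[(0, 2)] * 3,     # at most 2 servings of each food
              method="highs")
```

A "problem nutrient" in the abstract's sense corresponds to a floor that cannot be met even at the serving upper bounds, i.e. an infeasible constraint in this formulation.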
Linearization algorithms for line transfer
Scott, H.A.
1990-11-06
Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions. 11 refs.
Küppers, B
1992-05-01
Two developments have led to the elaboration of a complementarity theory in modern natural science--the Copenhagen interpretation of quantum mechanics and Viktor von Weizsäcker's introduction of the subject into biology. The revolutionary implications of these developments are discernible in modern literature. The author seeks to show that a paradigm shift in positivistic clinical medicine is necessitated by the history and theory of science.
NASA Astrophysics Data System (ADS)
Borga, Marco; Baptiste, François; Zoccatelli, Davide
2016-04-01
High penetration of climate-related energy sources (such as solar and small hydropower) might be facilitated by using their complementarity in order to increase the balance between energy load and generation. In this study we examine and map the complementarity between solar PV and run-of-the-river energy along the river network of catchments in the Eastern Italian Alps which are significantly affected by glaciers. We analyze energy source complementarity across different temporal scales using two indicators: the standard deviation of the energy balance and the theoretical storage required for balancing generation and load (François et al., 2016). Temporal scales ranging from hours to years are assessed. By using a glacio-hydrological model able to simulate both the glacier and hydrology dynamics, we analyse the sensitivity of the obtained results with respect to different scenarios of glacier retreat. Reference: François, B., Hingray, B., Raynaud, D., Borga, M., Creutin, J.D., 2016: Increasing climate-related-energy penetration by integrating run-of-the river hydropower to wind/solar mix. Renewable Energy, 87, 686-696.
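The two indicators named in the abstract can be computed directly from paired generation and load series. A sketch with synthetic hourly numbers (not the study's data), assuming the theoretical storage is taken as the range of the cumulative energy balance:

```python
def balance_indicators(generation, load):
    """Return (std of energy balance, theoretical storage) for paired series.
    Storage is sketched as the spread between the highest and lowest fill
    levels of a notional lossless store driven by the net balance."""
    balance = [g - l for g, l in zip(generation, load)]
    mean = sum(balance) / len(balance)
    std = (sum((b - mean) ** 2 for b in balance) / len(balance)) ** 0.5
    fill, levels = 0.0, [0.0]
    for b in balance:
        fill += b                        # surplus charges, deficit discharges
        levels.append(fill)
    storage = max(levels) - min(levels)  # capacity needed to absorb the swing
    return std, storage

std, storage = balance_indicators([3, 5, 4, 2], [4, 3, 4, 3])
```

Complementary sources shrink both numbers: a flatter balance lowers its standard deviation, and a cumulative balance that stays near zero needs a smaller store.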
Roh, Jooho; Byun, Sung June; Seo, Youngsil; KIm, Minjae; Lee, Jae-Ho; Kim, Songmi; Lee, Yuno; Lee, Keun Woo; Kim, Jin-Kyoo; Kwon, Myung-Hee
2015-02-01
In contrast to a number of studies on the humanization of non-human antibodies, the reshaping of a non-human antibody into a chicken antibody has never been attempted. Therefore, nothing is known about the animal species-dependent compatibility of the framework regions (FRs) that sustain the appropriate conformation of the complementarity-determining regions (CDRs). In this study, we attempted the reshaping of the variable domains of the mouse catalytic anti-nucleic acid antibody 3D8 (m3D8) into the FRs of a chicken antibody (“chickenization”) by CDR grafting, which is a common method for the humanization of antibodies. CDRs of the acceptor chicken antibody that showed a high homology to the FRs of m3D8 were replaced with those of m3D8, resulting in the chickenized antibody (ck3D8). ck3D8 retained the biochemical properties (DNA binding, DNA hydrolysis, and cellular internalizing activities) and three-dimensional structure of m3D8 and showed reduced immunogenicity in chickens. Our study demonstrates that CDR grafting can be applied to the chickenization of a mouse antibody, probably due to the interspecies compatibility of the FRs.
NASA Astrophysics Data System (ADS)
Bosyk, G. M.; Portesi, M.; Holik, F.; Plastino, A.
2013-06-01
We revisit the connection between the complementarity and uncertainty principles of quantum mechanics within the framework of Mach-Zehnder interferometry. We focus our attention on the trade-off relation between complementary path information and fringe visibility. This relation is equivalent to the uncertainty relation of Schrödinger and Robertson for a suitably chosen pair of observables. We show that it is equivalent as well to the uncertainty inequality provided by Landau and Pollak. We also study the relationship of this trade-off relation with a family of entropic uncertainty relations based on Rényi entropies. There is no equivalence in this case, but the different values of the entropic parameter do define regimes that provide us with a tool to discriminate between non-trivial states of minimum uncertainty. The existence of such regimes agrees with previous results of Luis (2011 Phys. Rev. A 84 034101), although their meaning was not sufficiently clear. We discuss the origin of these regimes with the intention of gaining a deeper understanding of entropic measures.
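The trade-off between path information and fringe visibility analyzed here is commonly summarized by the standard wave-particle duality relation (a well-known inequality quoted for orientation, not a result of this paper):

```latex
\mathcal{D}^{2} + \mathcal{V}^{2} \le 1
```

where $\mathcal{D}$ is the path distinguishability and $\mathcal{V}$ the fringe visibility, with equality for pure states.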
López-Madrigal, Sergio; Beltrà, Aleixandre; Resurrección, Serena; Soto, Antonia; Latorre, Amparo; Moya, Andrés; Gil, Rosario
2014-01-01
Intracellular bacterial supply of essential amino acids is common among sap-feeding insects, thus complementing the scarcity of nitrogenous compounds in plant phloem. This is also the role of the two mealybug endosymbiotic systems whose genomes have been sequenced. In the nested endosymbiotic system from Planococcus citri (Pseudococcinae), “Candidatus Tremblaya princeps” and “Candidatus Moranella endobia” cooperate to synthesize essential amino acids, while in Phenacoccus avenae (Phenacoccinae) this function is performed by its single endosymbiont “Candidatus Tremblaya phenacola.” However, little is known regarding the evolution of essential amino acid supplementation strategies in other mealybug systems. To address this knowledge gap, we screened for the presence of six selected loci involved in essential amino acid biosynthesis in five additional mealybug species. We found evidence of ongoing complementarity among endosymbionts from insects of subfamily Pseudococcinae, as well as horizontal gene transfer affecting endosymbionts from insects of family Phenacoccinae, providing a more comprehensive picture of the evolutionary history of these endosymbiotic systems. Additionally, we report two diagnostic motifs to help identify invasive mealybug species. PMID:25206351
Volatile fractionation in the early solar system and chondrule/matrix complementarity
Bland, Philip A.; Alard, Olivier; Benedix, Gretchen K.; Kearsley, Anton T.; Menzies, Olwyn N.; Watt, Lauren E.; Rogers, Nick W.
2005-01-01
Bulk chondritic meteorites and terrestrial planets show a monotonic depletion in moderately volatile and volatile elements relative to the Sun's photosphere and CI carbonaceous chondrites. Although volatile depletion was the most fundamental chemical process affecting the inner solar nebula, debate continues as to its cause. Carbonaceous chondrites are the most primitive rocks available to us, and fine-grained, volatile-rich matrix is the most primitive component in these rocks. Several volatile depletion models posit a pristine matrix, with uniform CI-like chemistry across the different chondrite groups. To understand the nature of volatile fractionation, we studied minor and trace element abundances in fine-grained matrices of a variety of carbonaceous chondrites. We find that matrix trace element abundances are characteristic for a given chondrite group; they are depleted relative to CI chondrites, but are enriched relative to bulk compositions of their parent meteorites, particularly in volatile siderophile and chalcophile elements. This enrichment produces a highly nonmonotonic trace element pattern that requires a complementary depletion in chondrule compositions to achieve a monotonic bulk. We infer that carbonaceous chondrite matrices are not pristine: they formed from a material reservoir that was already depleted in volatile and moderately volatile elements. Additional thermal processing occurred during chondrule formation, with exchange of volatile siderophile and chalcophile elements between chondrules and matrix. This chemical complementarity shows that these chondritic components formed in the same nebula region. PMID:16174733
PHYSICS OF PREDETERMINED EVENTS: Complementarity States of Choice-Chance Mechanics
NASA Astrophysics Data System (ADS)
Morales, Manuel
2011-04-01
We find that the deterministic application of choice-chance mechanics, as applied in the Tempt Destiny experiment, is also reflected in the construct of the double-slit experiment and that the complementary results obtained by this treatment mirror that of Niels Bohr's principle of complementarity as well as reveal Einstein's hidden variables. Whereas the double-slit experiment serves to reveal the deterministic and indeterministic behavioral characteristics of our physical world, the Tempt Destiny experiment serves to reveal the deterministic and indeterministic behavioral characteristics of our actions. The unifying factor shared by both experiments is that they are of the same construct yielding similar results from the same energy. Given that, we seek to establish if the fundamental states of energy, i.e., certainty and probability, are indeed predetermined. Over the span of ten years, the Tempt Destiny experimental model of pairing choice and chance events has statistically obtained consistent results of absolute value. The evidence strongly suggests that the fundamental mechanics of energy is a complement of two mutually exclusive mechanisms that bring into being - as opposed to revealing - the predetermined state of an event as either certain or probable, although not both simultaneously.
Maher, M Cyrus; Hernandez, Ryan D
2015-04-01
Ortholog detection (OD) is a lynchpin of most statistical methods in comparative genomics. This task involves accurately identifying genes across species that descend from a common ancestral sequence. OD methods comprise a wide variety of approaches, each with its own benefits and costs under a variety of evolutionary and practical scenarios. In this article, we examine the proteomes of ten mammals by using four methodologically distinct, rigorously filtered OD methods. In head-to-head comparisons, we find that these algorithms significantly outperform one another for 38-45% of the genes analyzed. We leverage this high complementarity through the development of MOSAIC, or Multiple Orthologous Sequence Analysis and Integration by Cluster optimization, the first tool for integrating methodologically diverse OD methods. Relative to the four methods examined, MOSAIC more than quintuples the number of alignments for which all species are present while simultaneously maintaining or improving functional-, phylogenetic-, and sequence identity-based measures of ortholog quality. Further, this improvement in alignment quality yields more confidently aligned sites and higher levels of overall conservation, while simultaneously detecting up to 180% more positively selected sites. We close by highlighting a MOSAIC-specific positively selected site near the active site of TPSAB1, an enzyme linked to asthma, heart disease, and irritable bowel disease. MOSAIC alignments, source code, and full documentation are available at http://pythonhosted.org/bio-MOSAIC.
Low and high energy phenomenology of quark-lepton complementarity scenarios
Hochmuth, Kathrin A.; Rodejohann, Werner
2007-04-01
We conduct a detailed analysis of the phenomenology of two predictive seesaw scenarios leading to quark-lepton complementarity. In both cases we discuss the neutrino mixing observables and their correlations, neutrinoless double beta decay and lepton flavor violating decays such as μ → eγ. We also comment on leptogenesis. The first scenario is disfavored at the level of one to two standard deviations, in particular due to its prediction for |U_e3|. There can be resonant leptogenesis with quasidegenerate heavy and light neutrinos, which would imply sizable cancellations in neutrinoless double beta decay. The decays μ → eγ and τ → μγ are typically observable unless the SUSY masses approach the TeV scale. In the second scenario leptogenesis is impossible. It is, however, in perfect agreement with all oscillation data. The prediction for μ → eγ is in general too large, unless the SUSY masses are in the range of several TeV. In this case τ → eγ and τ → μγ are unobservable.
The Space Infrared Interferometric Telescope (SPIRIT) and its Complementarity to ALMA
NASA Technical Reports Server (NTRS)
Leisawitz, Dave
2007-01-01
We report results of a pre-Formulation Phase study of SPIRIT, a candidate NASA Origins Probe mission. SPIRIT is a spatial and spectral interferometer with an operating wavelength range 25 - 400 microns. SPIRIT will provide sub-arcsecond resolution images and spectra with resolution R = 3000 in a 1 arcmin field of view to accomplish three primary scientific objectives: (1) Learn how planetary systems form from protostellar disks, and how they acquire their chemical organization; (2) Characterize the family of extrasolar planetary systems by imaging the structure in debris disks to understand how and where planets of different types form; and (3) Learn how high-redshift galaxies formed and merged to form the present-day population of galaxies. In each of these science domains, SPIRIT will yield information complementary to that obtainable with the James Webb Space Telescope (JWST) and the Atacama Large Millimeter Array (ALMA), and all three observatories could operate contemporaneously. Here we shall emphasize the SPIRIT science goals (1) and (2) and the mission's complementarity with ALMA.
Complementarity with neutron two-path interferences and separated-oscillatory-field resonances
NASA Astrophysics Data System (ADS)
Ramsey, Norman F.
1993-07-01
The implications of complementarity on two-path neutron interferences and on separated-oscillatory-field resonances are discussed. The studies are extensions of those by Furry and Ramsey [Phys. Rev. 118, 623 (1960)] on two-path electron interferences which showed that an apparatus used to determine the electron path introduces uncertainties in the scalar and vector potentials which in turn disturb the phase of the electron wave function so much through the Aharonov-Bohm effects [Phys. Rev. 115, 485 (1959)] that the interference fringes disappear. A similar result is derived here for the neutron, but with the phase uncertainties coming from the magnetic moment's motion through an electric field as discussed by Anandan [Phys. Rev. Lett. 48, 1660 (1982)], and Aharonov and Casher [Phys. Rev. Lett. 53, 319 (1984)]. A corresponding result is also obtained for separated-oscillatory-fields resonances, which can be interpreted as an interference between two different paths in spin space. An interesting difference between the separated-path and separated-oscillatory-field experiments is that the latter may be interpreted classically.
Quark-lepton complementarity predictions for θ23^PMNS and CP violation
NASA Astrophysics Data System (ADS)
Sharma, Gazal; Chauhan, B. C.
2016-07-01
In the light of recent experimental results on θ13^PMNS, we re-investigate the complementarity between the quark and lepton mixing matrices and obtain predictions for the most unsettled neutrino mixing parameters, such as θ23^PMNS and the CP-violating phase invariants J, S1 and S2. This paper is motivated by our previous work, where in a QLC model we predicted the value θ13^PMNS = (9 +1 -2)°, which was found to be in strong agreement with the experimental results. In the QLC model the non-trivial correlation between the CKM and PMNS mixing matrices is given by a correlation matrix (Vc). We perform a numerical simulation to estimate the texture of Vc, and in our findings we obtain a small deviation from the Tri-Bi-Maximal (TBM) texture and a large one from the Bi-Maximal texture, which is consistent with the work already reported in the literature. In the further investigation we obtain quite constrained limits, sin²θ23^PMNS = 0.4235 +0.0032 -0.0043, narrower than the existing ones. We also obtain constrained limits for the three CP-violating phase invariants J, S1 and S2: J < 0.0315, S1 < 0.12 and S2 < 0.08, respectively.
Zhu, Dan H.; Wang, Ping; Zhang, Wei Z.; Yuan, Yue; Li, Bin; Wang, Jiang
2015-01-01
Background Although plant diversity is postulated to resist invasion, studies have not provided consistent results, a discrepancy largely ascribed to the influence of covarying environmental factors. Methodology/Principal Findings To explore the mechanisms by which plant diversity influences community invasibility, an experiment was conducted involving grassland sites varying in their species richness (one, two, four, eight, and sixteen species). Light interception efficiency and soil resources (total N, total P, and water content) were measured. The number of species, biomass, and the number of seedlings of the invading species decreased significantly with species richness. The presence of Patrinia scabiosaefolia Fisch. ex Trev. and Mosla dianthera (Buch.-Ham. ex Roxburgh) Maxim. significantly increased the resistance of the communities to invasion. A structural equation model showed that the richness of planted species had no direct and significant effect on invasion. Light interception efficiency had a negative effect on invasion, whereas soil water content had a positive effect. In monocultures, Antenoron filiforme (Thunb.) Rob. et Vaut. showed the highest light interception efficiency and P. scabiosaefolia recorded the lowest soil water content. With increased planted-species richness, a greater percentage of pots showed light use efficiency higher than that of A. filiforme and a lower soil water content than that in P. scabiosaefolia. Conclusions/Significance The results of this study suggest that plant diversity confers resistance to invasion, which is mainly ascribed to the sampling effect of particular species and the complementarity effect among species on resources use. PMID:26556713
Linearized Kernel Dictionary Learning
NASA Astrophysics Data System (ADS)
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which was introduced recently, shows an improvement in classification performance relative to its linear counterpart, K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns via the Nyström method; second, since we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning algorithm can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and its performance-boosting properties.
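The virtual-sample construction described above can be sketched in a few lines (a minimal illustration under assumed choices: an RBF kernel, uniform column sampling, and a symmetric eigendecomposition in place of the SVD step; this is not the authors' implementation):

```python
import numpy as np

def virtual_samples(X, m, gamma=1.0, seed=0):
    """Nystrom-based 'virtual samples' F such that F.T @ F approximates
    the full RBF kernel matrix of X, so any *linear* dictionary-learning
    method can be run on F directly.  X is (d, n); m columns are sampled."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[1], size=m, replace=False)

    def rbf(A, B):  # Gaussian kernel between two column sets
        d2 = (np.sum(A**2, 0)[:, None] + np.sum(B**2, 0)[None, :]
              - 2.0 * A.T @ B)
        return np.exp(-gamma * d2)

    C = rbf(X, X[:, idx])              # (n, m) sampled kernel columns
    W = C[idx, :]                      # (m, m) intersection block
    # Nystrom approximation: K ~ C W^+ C^T; factor W^+ through its
    # eigendecomposition so that F.T @ F = C W^+ C^T.
    s, U = np.linalg.eigh(W)
    s = np.clip(s, 1e-12, None)        # guard tiny/negative eigenvalues
    F = (U / np.sqrt(s)).T @ C.T       # (m, n) virtual samples
    return F, idx

X = np.random.default_rng(1).standard_normal((5, 200))
F, idx = virtual_samples(X, m=50)
K_approx = F.T @ F                     # exact on the sampled subset
```

Any off-the-shelf linear dictionary-learning routine (e.g. K-SVD) can then be trained on the columns of F in place of the original samples.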
Carroll, Linda J.; Rothe, J. Peter
2010-01-01
Like other areas of health research, there has been increasing use of qualitative methods to study public health problems such as injuries and injury prevention. Likewise, the integration of qualitative and quantitative research (mixed-methods) is beginning to assume a more prominent role in public health studies. Indeed, using mixed-methods has great potential for gaining a broad and comprehensive understanding of injuries and their prevention. However, qualitative and quantitative research methods are based on two inherently different paradigms, and their integration requires a conceptual framework that permits the unity of these two methods. We present a theory-driven framework for viewing qualitative and quantitative research, which enables us to integrate them in a conceptually sound and useful manner. This framework has its foundation within the philosophical concept of complementarity, as espoused in the physical and social sciences, and draws on Bergson’s metaphysical work on the ‘ways of knowing’. Through understanding how data are constructed and reconstructed, and the different levels of meaning that can be ascribed to qualitative and quantitative findings, we can use a mixed-methods approach to gain a conceptually sound, holistic knowledge about injury phenomena that will enhance our development of relevant and successful interventions. PMID:20948937
Design of Linear Quadratic Regulators and Kalman Filters
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L.
1986-01-01
AESOP solves problems associated with the design of controls and state estimators for linear time-invariant systems. Systems considered are modeled in state-variable form by a set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are the linear quadratic regulator (LQR) design problem and the steady-state Kalman filter design problem. AESOP is interactive. The user solves design problems and analyzes solutions in a single interactive session. Both numerical and graphical information are available to the user during the session.
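The LQR half of such a tool reduces to solving an algebraic Riccati equation; a minimal sketch via the standard Hamiltonian-matrix method follows (a generic textbook construction with an illustrative double-integrator plant and unit weights, not AESOP's own algorithm):

```python
import numpy as np

def lqr(A, B, Q, R):
    """Continuous-time LQR gain via the Hamiltonian-matrix method.

    Minimizes the integral of x'Qx + u'Ru for x' = Ax + Bu and
    returns K such that u = -Kx is the optimal feedback law."""
    Rinv = np.linalg.inv(R)
    # Hamiltonian matrix associated with the Riccati equation
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]            # stable-eigenvalue eigenvectors
    n = A.shape[0]
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))  # stabilizing Riccati solution
    return Rinv @ B.T @ P

# Double integrator: x1' = x2, x2' = u.  With Q = I, R = 1 the known
# optimal gain is K = [1, sqrt(3)].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr(A, B, Q=np.eye(2), R=np.eye(1))
closed = A - B @ K                       # closed-loop dynamics matrix
```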
Greene, Stephanie L.; Kisha, Theodore J.; Yu, Long-Xi; Parra-Quijano, Mauricio
2014-01-01
A standard conservation strategy for plant genetic resources integrates in situ (on-farm or wild) and ex situ (gene or field bank) approaches. Gene bank managers collect ex situ accessions that represent a comprehensive snapshot of the genetic diversity of in situ populations at a given time and place. Although simple in theory, achieving complementary in situ and ex situ holdings is challenging. Using Trifolium thompsonii as a model insect-pollinated herbaceous perennial species, we used AFLP markers to compare genetic diversity and structure of ex situ accessions collected at two time periods (1995, 2004) from four locations, with their corresponding in situ populations sampled in 2009. Our goal was to assess the complementarity of the two approaches. We examined how gene flow, selection and genetic drift contributed to population change. Across locations, we found no difference in diversity between ex situ and in situ samples. One population showed a decline in genetic diversity over the 15 years studied. Population genetic differentiation among the four locations was significant, but weak. Association tests suggested infrequent, long distance gene flow. Selection and drift occurred, but differences due to spatial effects were three times as strong as differences attributed to temporal effects, and suggested recollection efforts could occur at intervals greater than fifteen years. An effective collecting strategy for insect pollinated herbaceous perennial species was to sample >150 plants, equalize maternal contribution, and sample along random transects with sufficient space between plants to minimize intrafamilial sampling. Quantifying genetic change between ex situ and in situ accessions allows genetic resource managers to validate ex situ collecting and maintenance protocols, develop appropriate recollection intervals, and provide an early detection mechanism for identifying problematic conditions that can be addressed to prevent further decline in
Complementarity of ResourceSat-1 AWiFS and Landsat TM/ETM+ sensors
Goward, S.N.; Chander, G.; Pagnutti, M.; Marx, A.; Ryan, R.; Thomas, N.; Tetrault, R.
2012-01-01
Considerable interest has been given to forming an international collaboration to develop a virtual moderate spatial resolution land observation constellation through aggregation of data sets from comparable national observatories such as the US Landsat, the Indian ResourceSat and related systems. This study explores the complementarity of India's ResourceSat-1 Advanced Wide Field Sensor (AWiFS) with the Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+). The analysis focuses on the comparative radiometry, geometry, and spectral properties of the two sensors. Two applied assessments of these data are also explored to examine the strengths and limitations of these alternate sources of moderate resolution land imagery with specific application domains. There are significant technical differences in these imaging systems including spectral band response, pixel dimensions, swath width, and radiometric resolution which produce differences in observation data sets. None of these differences was found to strongly limit comparable analyses in agricultural and forestry applications. Overall, we found that the AWiFS and Landsat TM/ETM+ imagery are comparable and in some ways complementary, particularly with respect to temporal repeat frequency. We have found that there are limits to our understanding of the AWiFS performance, for example, multi-camera design and stability of radiometric calibration over time, that leave some uncertainty that has been better addressed for Landsat through the Image Assessment System and related cross-sensor calibration studies. Such work still needs to be undertaken for AWiFS and similar observatories that may play roles in the Global Earth Observation System of Systems Land Surface Imaging Constellation.
Probing the Complementarity of FAIMS and Strong Cation Exchange Chromatography in Shotgun Proteomics
NASA Astrophysics Data System (ADS)
Creese, Andrew J.; Shimwell, Neil J.; Larkins, Katherine P. B.; Heath, John K.; Cooper, Helen J.
2013-03-01
High field asymmetric waveform ion mobility spectrometry (FAIMS), also known as differential ion mobility spectrometry, coupled with liquid chromatography tandem mass spectrometry (LC-MS/MS) offers benefits for the analysis of complex proteomics samples. Advantages include increased dynamic range, increased signal-to-noise, and reduced interference from ions of similar m/z. FAIMS also separates isomers and positional variants. An alternative, and more established, method of reducing sample complexity is prefractionation by use of strong cation exchange chromatography. Here, we have compared SCX-LC-MS/MS with LC-FAIMS-MS/MS for the identification of peptides and proteins from whole cell lysates from the breast carcinoma SUM52 cell line. Two FAIMS approaches are considered: (1) multiple compensation voltages within a single LC-MS/MS analysis (internal stepping) and (2) repeat LC-MS/MS analyses at different and fixed compensation voltages (external stepping). We also consider the consequence of the fragmentation method (electron transfer dissociation or collision-induced dissociation) on the workflow performance. The external stepping approach resulted in a greater number of protein and peptide identifications than the internal stepping approach for both ETD and CID MS/MS, suggesting that this should be the method of choice for FAIMS proteomics experiments. The overlap in protein identifications from the SCX method and the external FAIMS method was ~25% for both ETD and CID, and for peptides was less than 20%. The lack of overlap between FAIMS and SCX highlights the complementarity of the two techniques. Charge state analysis of the peptide assignments showed that the FAIMS approach identified a much greater proportion of triply-charged ions.
Shining Light on Benthic Macroalgae: Mechanisms of Complementarity in Layered Macroalgal Assemblages
Tait, Leigh W.; Hawes, Ian; Schiel, David R.
2014-01-01
Phototrophs underpin most ecosystem processes, but to do this they need sufficient light. This critical resource, however, is compromised along many marine shores by increased loads of sediments and nutrients from degraded inland habitats. Increased attenuation of total irradiance within coastal water columns due to turbidity is known to reduce species' depth limits and affect the taxonomic structure and architecture of algal-dominated assemblages, but virtually no attention has been paid to the potential for changes in spectral quality of light energy to impact production dynamics. Pioneering studies over 70 years ago showed how different pigmentation of red, green and brown algae affected absorption spectra, action spectra, and photosynthetic efficiency across the PAR (photosynthetically active radiation) spectrum. Little of this, however, has found its way into ecological syntheses of the impacts of optically active contaminants on coastal macroalgal communities. Here we test the ability of macroalgal assemblages composed of multiple functional groups (including representatives from the chlorophyta, rhodophyta and phaeophyta) to use the total light resource, including different light wavelengths and examine the effects of suspended sediments on the penetration and spectral quality of light in coastal waters. We show that assemblages composed of multiple functional groups are better able to use light throughout the PAR spectrum. Macroalgal assemblages with four sub-canopy species were between 50–75% more productive than assemblages with only one or two sub-canopy species. Furthermore, attenuation of the PAR spectrum showed both a loss of quanta and a shift in spectral distribution with depth across coastal waters of different clarity, with consequences to productivity dynamics of diverse layered assemblages. The processes of light complementarity may help provide a mechanistic understanding of how altered turbidity affects macroalgal assemblages in coastal waters.
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
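A crisp (non-fuzzy) instance of the linearized problem can be written directly as an LP (toy returns, a linear risk proxy, and all numbers below are assumptions for illustration; the paper's fuzzy multi-objective treatment is not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

# Toy linear portfolio model: maximize expected return r @ x subject to
# a full-investment budget, per-asset caps, and a linear risk bound
# d @ x <= risk_cap, where d is a per-asset risk proxy (e.g. mean
# absolute deviation).  Numbers are illustrative, not from the paper.
r = np.array([0.08, 0.12, 0.05])      # expected returns (assumed)
d = np.array([0.10, 0.25, 0.03])      # risk proxy per asset (assumed)
risk_cap = 0.12

res = linprog(
    c=-r,                              # linprog minimizes, so negate
    A_ub=[d], b_ub=[risk_cap],         # linearized risk constraint
    A_eq=[np.ones(3)], b_eq=[1.0],     # weights sum to 1
    bounds=[(0.0, 0.6)] * 3,           # no shorting, 60% cap per asset
    method="highs",
)
weights = res.x                        # optimal asset weights
```

Replacing the fixed vectors r and d with interval or membership-function parameters, and solving for each objective level, gives the fuzzy multi-objective variant the abstract describes.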
NASA Astrophysics Data System (ADS)
Birx, Daniel
1992-03-01
Among the family of particle accelerators, the Induction Linear Accelerator is the best suited for the acceleration of high current electron beams. Because the electromagnetic radiation used to accelerate the electron beam is not stored in the cavities but is supplied by transmission lines during the beam pulse, it is possible to utilize very low Q (typically < 10) structures and very large beam pipes. This combination increases the beam-breakup-limited maximum currents to of order kiloamperes. The micropulse lengths of these machines are measured in tens of nanoseconds and duty factors as high as 10^-4 have been achieved. Until recently the major problem with these machines has been associated with the pulse power drive. Beam currents of kiloamperes and accelerating potentials of megavolts require peak power drives of gigawatts since no energy is stored in the structure. The marriage of linear accelerator technology and nonlinear magnetic compressors has produced some unique capabilities. It now appears possible to produce electron beams with average currents measured in amperes, peak currents in kiloamperes and gradients exceeding 1 MeV/meter, with power efficiencies approaching 50%. The nonlinear magnetic compression technology has replaced the spark gap drivers used on earlier accelerators with state-of-the-art all-solid-state SCR-commutated compression chains. The reliability of these machines is now approaching a 10^10-shot MTBF. In the following paper we will briefly review the historical development of induction linear accelerators and then discuss the design considerations.
Arrenberg, Sebastian; et al.,
2013-10-31
In this Report we discuss the four complementary searches for the identity of dark matter: direct detection experiments that look for dark matter interacting in the lab, indirect detection experiments that connect lab signals to dark matter in our own and other galaxies, collider experiments that elucidate the particle properties of dark matter, and astrophysical probes sensitive to non-gravitational interactions of dark matter. The complementarity among the different dark matter searches is discussed qualitatively and illustrated quantitatively in several theoretical scenarios. Our primary conclusion is that the diversity of possible dark matter candidates requires a balanced program based on all four of those approaches.
Polonelli, Luciano; Pontón, José; Elguezabal, Natalia; Moragues, María Dolores; Casoli, Claudio; Pilotti, Elisabetta; Ronzi, Paola; Dobroff, Andrey S.; Rodrigues, Elaine G.; Juliano, Maria A.; Maffei, Domenico Leonardo; Magliani, Walter; Conti, Stefania; Travassos, Luiz R.
2008-01-01
Background Complementarity-determining regions (CDRs) are immunoglobulin (Ig) hypervariable domains that determine specific antibody (Ab) binding. We have shown that synthetic CDR-related peptides and many decapeptides spanning the variable region of a recombinant yeast killer toxin-like antiidiotypic Ab are candidacidal in vitro. An alanine-substituted decapeptide from the variable region of this Ab displayed increased cytotoxicity in vitro and/or therapeutic effects in vivo against various bacteria, fungi, protozoa and viruses. The possibility that isolated CDRs, represented by short synthetic peptides, may display antimicrobial, antiviral and antitumor activities irrespective of Ab specificity for a given antigen is addressed here. Methodology/Principal Findings CDR-based synthetic peptides of murine and human monoclonal Abs directed to: a) a protein epitope of Candida albicans cell wall stress mannoprotein; b) a synthetic peptide containing well-characterized B-cell and T-cell epitopes; c) a carbohydrate blood group A substance, showed differential inhibitory activities in vitro, ex vivo and/or in vivo against C. albicans, HIV-1 and B16F10-Nex2 melanoma cells, conceivably involving different mechanisms of action. Antitumor activities involved peptide-induced caspase-dependent apoptosis. Engineered peptides, obtained by alanine substitution of Ig CDR sequences, and used as surrogates of natural point mutations, showed further differential increased/unaltered/decreased antimicrobial, antiviral and/or antitumor activities. The inhibitory effects observed were largely independent of the specificity of the native Ab and involved chiefly germline encoded CDR1 and CDR2 of light and heavy chains. Conclusions/Significance The high frequency of bioactive peptides based on CDRs suggests that Ig molecules are sources of an unlimited number of sequences potentially active against infectious agents and tumor cells. The easy production and low cost of small sized synthetic
Zhao, Shanrong; Lu, Jin
2010-01-01
Determination of framework regions (FRs) and complementarity determining regions (CDRs) in an antibody is essential for understanding the underlying biology as well as antibody engineering and optimization. However, there are no computational algorithms available to delimit an antibody sequence or a library of sequences into FRs and CDRs in a coherent and automatic fashion. Based upon the mapping relationships among mature antibody sequences and their corresponding germline gene segments, a novel computational algorithm has been developed for automatic determination of CDRs. Even though a human can make more than 10^12 different antibody molecules in its preimmune repertoire to fight off invading pathogens, these antibodies are generated from rearrangements of a very limited number of germline variable (V) gene, diversity (D) gene and joining (J) gene segments followed by somatic hypermutation. The framework regions FR1, FR2 and FR3 in mature antibodies are encoded by germline V gene segments, while FR4 is encoded by J gene segments. Since there are only a limited number of germline gene segments, these genes can be pre-delimited to generate a knowledge base of FRs and CDRs. Then for a given antibody sequence, the algorithm scans each pre-delimited gene in the knowledge base, finds the best matching V and J segments, and accordingly, identifies the FRs and CDRs. The described algorithm is stringently tested using nearly 25,000 human antibody sequences from NCBI, and it has proven to be very robust. Over 99.7% of antibody sequences can be delimited computationally. Of those delimited sequences, only 0.28% of them have somatic insertions and deletions in FRs, and their corresponding delimited results need manual checking. Another feature of the algorithm is that it is CDR definition independent, and can be easily extended to other CDR definitions besides the most widely used Kabat, Chothia and IMGT definitions. In addition to delimitation of antibody sequences into FRs
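The scan-and-transfer idea can be caricatured in a few lines (the germline sequences and boundary positions below are hypothetical toys; a real implementation would use curated IMGT/Kabat germline databases and proper sequence alignment):

```python
# Knowledge base: each germline V segment is pre-delimited into
# FR1, CDR1, FR2 by (start, end) boundaries on the segment.
# Sequences and boundaries here are invented for illustration.
GERMLINE_V = {
    "IGHV-toy1": ("QVQLVQSGAEVKKPG" "GYTFTSYG" "WVRQAPGQGLEW",
                  {"FR1": (0, 15), "CDR1": (15, 23), "FR2": (23, 35)}),
    "IGHV-toy2": ("EVQLLESGGGLVQPG" "GFTFSSYA" "WVRQAPGKGLEW",
                  {"FR1": (0, 15), "CDR1": (15, 23), "FR2": (23, 35)}),
}

def identity(a, b):
    """Fraction of matching positions over the shorter length."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n

def delimit(seq):
    """Pick the best-matching germline V segment, then transfer its
    pre-computed FR/CDR boundaries onto the query sequence."""
    best = max(GERMLINE_V, key=lambda g: identity(seq, GERMLINE_V[g][0]))
    regions = GERMLINE_V[best][1]
    return best, {name: seq[s:e] for name, (s, e) in regions.items()}

# A query identical to toy1 except one somatic mutation inside CDR1
query = "QVQLVQSGAEVKKPG" "GYTFTNYG" "WVRQAPGQGLEW"
gene, regions = delimit(query)
```

Because the FR/CDR boundaries live on the germline side, the same machinery accommodates any CDR definition simply by swapping the boundary tables.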
Zabetakis, Dan; Anderson, George P.; Bayya, Nikhil; Goldman, Ellen R.
2013-01-01
Single domain antibodies (sdAbs) are the recombinantly-expressed variable domain from camelid (or shark) heavy chain only antibodies and provide rugged recognition elements. Many sdAbs possess excellent affinity and specificity; most refold and are able to bind antigen after thermal denaturation. The sdAb A3, specific for the toxin Staphylococcal enterotoxin B (SEB), shows both sub-nanomolar affinity for its cognate antigen (0.14 nM) and an unusually high melting point of 85°C. Understanding the source of sdAb A3’s high melting temperature could provide a route for engineering improved melting temperatures into other sdAbs. The goal of this work was to determine how much of sdAb A3’s stability is derived from its complementarity determining regions (CDRs) versus its framework. Towards answering this question we constructed a series of CDR swap mutants in which the CDRs from unrelated sdAbs were integrated into A3’s framework and where A3’s CDRs were integrated into the framework of the other sdAbs. All three CDRs from A3 were moved to the frameworks of sdAb D1 (a ricin binder that melts at 50°C) and the anti-ricin sdAb C8 (melting point of 60°C). Similarly, the CDRs from sdAb D1 and sdAb C8 were moved to the sdAb A3 framework. In addition individual CDRs of sdAb A3 and sdAb D1 were swapped. Melting temperature and binding ability were assessed for each of the CDR-exchange mutants. This work showed that CDR2 plays a critical role in sdAb A3’s binding and stability. Overall, results from the CDR swaps indicate CDR interactions play a major role in the protein stability. PMID:24143255
The generalized pole assignment problem. [dynamic output feedback problems]
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1979-01-01
Two dynamic output feedback problems for a linear, strictly proper system are considered, along with their interrelationships. The problems are formulated in the frequency domain and investigated in terms of linear equations over rings of polynomials. Necessary and sufficient conditions are expressed using genericity.
LRGS: Linear Regression by Gibbs Sampling
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2016-02-01
LRGS (Linear Regression by Gibbs Sampling) implements a Gibbs sampler to solve the problem of multivariate linear regression with uncertainties in all measured quantities and intrinsic scatter. LRGS extends an algorithm by Kelly (2007) that used Gibbs sampling for performing linear regression in fairly general cases in two ways: generalizing the procedure for multiple response variables, and modeling the prior distribution of covariates using a Dirichlet process.
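The core idea, alternating draws from the conditional posteriors, can be sketched for the simplest case: one response with intrinsic scatter and flat priors. This is an illustrative reduction, not the LRGS algorithm itself, which also models measurement errors in all quantities, multiple responses, and a Dirichlet-process covariate prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = 1 + 2x plus intrinsic scatter with sd = 0.5
x = rng.uniform(0.0, 10.0, 200)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, x.size)
X = np.column_stack([np.ones_like(x), x])

def gibbs(X, y, n_iter=2000):
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    sigma2 = 1.0
    draws = []
    for _ in range(n_iter):
        # beta | sigma2: normal centered on the least-squares fit
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
        # sigma2 | beta: inverse-gamma from the residual sum of squares
        ssr = np.sum((y - X @ beta) ** 2)
        sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / ssr)
        draws.append((beta[0], beta[1], sigma2))
    return np.array(draws[n_iter // 2:])  # discard burn-in

post = gibbs(X, y)
a, b, s2 = post.mean(axis=0)
```

With these conjugate conditionals each draw is exact, which is what makes the Gibbs approach attractive for the "fairly general cases" the abstract mentions.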
ERIC Educational Resources Information Center
Demana, Franklin; Waits, Bert K.
1993-01-01
Discusses solutions to real-world linear particle-motion problems using graphing calculators to simulate the motion and traditional analytic methods of calculus. Applications include (1) changing circular or curvilinear motion into linear motion and (2) linear particle accelerators in physics. (MDH)
Systems of Linear Equations on a Spreadsheet.
ERIC Educational Resources Information Center
Bosch, William W.; Strickland, Jeff
1998-01-01
The Optimizer in Quattro Pro and the Solver in Excel software programs make solving linear and nonlinear optimization problems feasible for business mathematics students. Proposes ways in which the Optimizer or Solver can be coaxed into solving systems of linear equations. (ASK)
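Outside a spreadsheet, the task the Solver is coaxed into, solving a square linear system, is a single library call; a minimal sketch:

```python
import numpy as np

# Solve 2x + y = 5 and x + 3y = 10 as the matrix equation A @ v = b
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
v = np.linalg.solve(A, b)   # -> [1., 3.], i.e. x = 1, y = 3
```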
Linear pose estimation from points or lines
NASA Technical Reports Server (NTRS)
Ansar, A.; Daniilidis, K.
2002-01-01
We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. We present a number of simulations which compare our results to two other recent linear algorithms as well as to iterative approaches.
Equating Scores from Adaptive to Linear Tests
ERIC Educational Resources Information Center
van der Linden, Wim J.
2006-01-01
Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…
Linear stability of directional solidification cells
Kessler, D.A.; Levine, H.
1990-03-15
We formulate the problem of finding the stability spectrum of the cellular pattern seen in directional solidification. This leads to a nonlinear eigenvalue problem for an integro-differential operator. We solve this problem numerically and compare our results to those obtained by linearizing the eigenvalue problem by employing the quasistatic approximation. Contrary to some recent claims, we find no evidence for a Hopf bifurcation to a dendritic pattern.
Analysis of a non-linear structure by considering two non-linear formulations
NASA Astrophysics Data System (ADS)
Majed, R.; Raynaud, J. L.
2003-03-01
In recent years, modal synthesis methods have been extended for solving non-linear dynamic problems subjected to harmonic excitation. These methods are based on the notion of non-linear or linearized modes and exploited in the case of structures affected by localized non-linearity. Experimental tests executed on non-linear structures are time-consuming, particularly when repeated experimental tests are needed. It is often preferable to consider new non-linear methods with a view to significantly decreasing the number of attempts during prototype tests and improving the accuracy of the dynamic behaviour. This article describes two fundamental non-linear formulations based on two different strategies. The first formulation exploits the eigensolutions of the associated linear system and the dynamic characteristics of each localized non-linearity. The second formulation is based on the exploitation of the linearized eigensolutions obtained using an iterative process. This article contains a numerical and an experimental study which examines the non-linear behaviour of a structure affected by localized non-linearities. The study is intended to validate the numerical algorithm and to evaluate the problems arising from the introduction of non-linearities. The complex responses are evaluated using the iterative Newton-Raphson method for a series of discrete frequencies. The theory has been applied to a two-dimensional structure and consists of evaluating the harmonic responses obtained using the proposed formulations by comparing measured and calculated transfer functions.
NASA Technical Reports Server (NTRS)
Weber, Arthur L.
1989-01-01
Glyceraldehyde-3-phosphate acts as the substrate in a model of early self-replication of a phosphodiester copolymer of glycerate-3-phosphate and glycerol-3-phosphate. This model of self-replication is based on covalent complementarity in which information transfer is mediated by a single covalent bond, in contrast to multiple weak interactions that establish complementarity in nucleic acid replication. This replication model is connected to contemporary biochemistry through its use of glyceraldehyde-3-phosphate, a central metabolite of glycolysis and photosynthesis.
Positive fractional linear electrical circuits
NASA Astrophysics Data System (ADS)
Kaczorek, Tadeusz
2013-10-01
The positive fractional linear systems and electrical circuits are addressed. New classes of fractional asymptotically stable and unstable electrical circuits are introduced. The Caputo and Riemann-Liouville definitions of fractional derivatives are used in the analysis of positive electrical circuits composed of resistors, capacitors, coils and voltage (current) sources. Positive fractional electrical circuits, and especially various types of unstable circuits, are analyzed. Some open problems are formulated.
JUICE: complementarity of the payload in addressing the mission science objectives
NASA Astrophysics Data System (ADS)
Titov, Dmitri; Barabash, Stas; Bruzzone, Lorenzo; Dougherty, Michele; Erd, Christian; Fletcher, Leigh; Gare, Philippe; Gladstone, Randall; Grasset, Olivier; Gurvits, Leonid; Hartogh, Paul; Hussmann, Hauke; Iess, Luciano; Jaumann, Ralf; Langevin, Yves; Palumbo, Pasquale; Piccioni, Giuseppe; Wahlund, Jan-Erik
2014-05-01
radar sounder (RIME) for exploring the surface and subsurface of the moons, and a radio science experiment (3GM) to probe the atmospheres of Jupiter and its satellites and to perform measurements of the gravity fields. An in situ package comprises a powerful particle environment package (PEP), a magnetometer (J-MAG) and a radio and plasma wave instrument (RPWI), including electric fields sensors and a Langmuir probe. An experiment (PRIDE) using ground-based Very-Long-Baseline Interferometry (VLBI) will provide precise determination of the moons ephemerides. The instruments will work together to achieve mission science objectives that otherwise cannot be achieved by a single experiment. For instance, joint J-MAG, 3GM, GALA and JANUS observations would constrain thickness of the ice shell, ocean depth and conductivity. SWI, 3GM and UVS would complement each other in the temperature sounding of the Jupiter atmosphere. The complex coupling between magnetosphere and atmosphere of Jupiter will be jointly studied by combination of aurora imaging (UVS, MAJIS, JANUS) and plasma and fields measurements (J-MAG, RPWI, PEP). The talk will give an overview of the JUICE payload focusing on complementarity and synergy between the experiments.
The principles and construction of linear colliders
Rees, J.
1986-09-01
The problems posed to the designers and builders of high-energy linear colliders are discussed. Scaling laws of linear colliders are considered. The problem of attainment of small interaction areas is addressed. The physics of damping rings, which are designed to condense beam bunches in phase space, is discussed. The effects of wake fields on a particle bunch in a linac, particularly in the conventional disk-loaded microwave linac structures, are discussed, as well as ways of dealing with those effects. Finally, the SLAC Linear Collider is described. 18 refs., 17 figs. (LEW)
Electrothermal linear actuator
NASA Technical Reports Server (NTRS)
Derr, L. J.; Tobias, R. A.
1969-01-01
Converting electric power into powerful linear thrust without generation of magnetic fields is accomplished with an electrothermal linear actuator. When heated by an energized filament, a stack of bimetallic washers expands and drives the end of the shaft upward.
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
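The routines this report describes survive essentially unchanged in modern libraries; as one illustration (assuming SciPy's low-level wrappers around the same FORTRAN interfaces), a Level-1 and a Level-3 call look like this:

```python
import numpy as np
from scipy.linalg import blas

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Level 1: AXPY computes y <- a*x + y
y2 = blas.daxpy(x, y, a=2.0)

# Level 3: GEMM computes C <- alpha * A @ B
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
B = np.array([[3.0],
              [4.0]])
C = blas.dgemm(alpha=1.0, a=A, b=B)
```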
Generalized Linear Covariance Analysis
NASA Astrophysics Data System (ADS)
Markley, F. Landis; Carpenter, J. Russell
2009-01-01
This paper presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2008-01-01
We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
NASA Astrophysics Data System (ADS)
Theofilis, Vassilios
2011-01-01
This article reviews linear instability analysis of flows over or through complex two-dimensional (2D) and 3D geometries. In the three decades since it first appeared in the literature, global instability analysis, based on the solution of the multidimensional eigenvalue and/or initial value problem, is continuously broadening both in scope and in depth. To date it has dealt successfully with a wide range of applications arising in aerospace engineering, physiological flows, food processing, and nuclear-reactor safety. In recent years, nonmodal analysis has complemented the more traditional modal approach and increased knowledge of flow instability physics. Recent highlights delivered by the application of either modal or nonmodal global analysis are briefly discussed. A conscious effort is made to demystify both the tools currently utilized and the jargon employed to describe them, demonstrating the simplicity of the analysis. Hopefully this will provide new impulses for the creation of next-generation algorithms capable of coping with the main open research areas in which step-change progress can be expected by the application of the theory: instability analysis of fully inhomogeneous, 3D flows and control thereof.
Technology, Linear Equations, and Buying a Car.
ERIC Educational Resources Information Center
Sandefur, James T.
1992-01-01
Discusses the use of technology in solving compound interest-rate problems that can be modeled by linear relationships. Uses a graphing calculator to solve the specific problem of determining the amount of money that can be borrowed to buy a car for a given monthly payment and interest rate. (MDH)
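The specific problem mentioned, how much can be borrowed for a given monthly payment and interest rate, is the present value of an annuity; a sketch with illustrative figures:

```python
def affordable_loan(monthly_payment, annual_rate, months):
    """Largest principal whose amortizing payment equals monthly_payment
    (present value of an annuity at monthly rate r)."""
    r = annual_rate / 12.0
    return monthly_payment * (1.0 - (1.0 + r) ** -months) / r

# Illustrative numbers: $300/month at 6% APR over 48 months
amount = affordable_loan(300.0, 0.06, 48)   # about $12,774
```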
Linear stochastic optimal control and estimation
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1976-01-01
A digital program has been written to solve the LSOCE problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a quadratic performance index.
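The deterministic core of this problem, the optimal linear state-feedback gain for a quadratic index, can be sketched by iterating the discrete Riccati recursion to a fixed point. This is a minimal stand-in, not the report's program, which also treats the white-noise estimation side:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati recursion:
    P <- Q + A'P(A - BK), with K = (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Double integrator discretized with dt = 0.1 (illustrative plant)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))
# The closed loop A - B K should have all eigenvalues inside the unit circle
```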
A Linear Algebraic Approach to Teaching Interpolation
ERIC Educational Resources Information Center
Tassa, Tamir
2007-01-01
A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
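The viewpoint is easy to demonstrate: in the monomial basis, interpolation is exactly a Vandermonde linear system. A minimal sketch:

```python
import numpy as np

# Interpolate samples of y = x^2 + x + 1 through three points:
# choosing the monomial basis {1, x, x^2} turns the problem into V c = y.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 7.0])
V = np.vander(xs, increasing=True)   # columns are 1, x, x^2
c = np.linalg.solve(V, ys)           # coefficients in that basis
```

A different choice of basis (Lagrange, Newton) changes V, not the underlying linear-algebra problem, which is the point of the approach.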
Generalised Assignment Matrix Methodology in Linear Programming
ERIC Educational Resources Information Center
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
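The same class of assignment problem can be solved outside a spreadsheet; a sketch using SciPy's assignment solver, which is equivalent to the linear-programming formulation over the assignment polytope (the cost matrix below is invented):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: cost[i, j] = cost of assigning worker i to job j
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
total = cost[rows, cols].sum()
```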
An Intuitive Approach in Teaching Linear Programming in High School.
ERIC Educational Resources Information Center
Ulep, Soledad A.
1990-01-01
Discusses solving inequality problems involving linear programming. Describes the usual and alternative approaches. Presents an intuitive approach for finding a feasible solution by maximizing the objective function. (YP)
Linear elastic fracture mechanics primer
NASA Technical Reports Server (NTRS)
Wilson, Christopher D.
1992-01-01
This primer is intended to remove the blackbox perception of fracture mechanics computer software by structural engineers. The fundamental concepts of linear elastic fracture mechanics are presented with emphasis on the practical application of fracture mechanics to real problems. Numerous rules of thumb are provided. Recommended texts for additional reading, and a discussion of the significance of fracture mechanics in structural design are given. Griffith's criterion for crack extension, Irwin's elastic stress field near the crack tip, and the influence of small-scale plasticity are discussed. Common stress intensities factor solutions and methods for determining them are included. Fracture toughness and subcritical crack growth are discussed. The application of fracture mechanics to damage tolerance and fracture control is discussed. Several example problems and a practice set of problems are given.
Winiger, Christian B; Langenegger, Simon M; Khorev, Oleg
2014-01-01
Aromatic π–π stacking interactions are ubiquitous in nature, medicinal chemistry and materials sciences. They play a crucial role in the stacking of nucleobases, thus stabilising the DNA double helix. The following paper describes a series of chimeric DNA–polycyclic aromatic hydrocarbon (PAH) hybrids. The PAH building blocks are electron-rich pyrene and electron-poor perylenediimide (PDI), and were incorporated into complementary DNA strands. The hybrids contain different numbers of pyrene–PDI interactions that were found to directly influence duplex stability. As the pyrene–PDI ratio approaches 1:1, the stability of the duplexes increases with an average value of 7.5 °C per pyrene–PDI supramolecular interaction indicating the importance of electrostatic complementarity for aromatic π–π stacking interactions. PMID:25161715
Kabat, E A; Wu, T T; Bilofsky, H
1976-02-01
From collected data on variable region sequences of heavy chains of immunoglobulins, the probability of random associations of any two amino-acid residues in the complementarity-determining segments was computed, and pairs of residues occurring significantly more frequently than expected were selected by computer. Significant associations between Phe 32 and Tyr 33, Phe 32 and Glu 35, and Tyr 33 and Glu 35 were found in six proteins, all of which were mouse myeloma proteins which bound phosphorylcholine (= phosphocholine). From the x-ray structure of McPC603, Tyr 33 and Glu 35 are contacting residues; a seventh phosphorylcholine-binding mouse myeloma protein also contained Phe 32 and Tyr 33 but position 35 had only been determined as Glx and thus this position had not been selected. Met 34 occurred in all seven phosphorylcholine-binding myeloma proteins but was also present at this position in 29 other proteins and thus was not selected; it is seen in the x-ray structure not to be a contacting residue. The role of Phe 32 is not obvious but it could have some conformational influence. A human phosphorylcholine-binding myeloma protein also had Phe, Tyr, and Met at positions 32, 33, and 34, but had Asp instead of Glu at position 35 and showed a lower binding constant. The ability to use sequence data to locate residues in complementarity-determining segments making contact with antigenic determinants and those playing essentially a structural role would contribute substantially to the understanding of antibody specificity. PMID:1061162
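The counting at the heart of the analysis, comparing observed pair frequencies against what independent position frequencies would predict, can be sketched on toy data (the sequences below are invented, not the immunoglobulin data of the paper):

```python
from collections import Counter

# Toy "sequences": one character per position of interest. Invented data.
seqs = ["FYME", "FYME", "FYME", "FYQE", "AGME", "AGSE"]
i, j = 0, 1  # the two positions being tested for association

pair_obs = Counter((s[i], s[j]) for s in seqs)
pi = Counter(s[i] for s in seqs)
pj = Counter(s[j] for s in seqs)
n = len(seqs)

# Observed vs. expected-under-independence count for the pair (F, Y):
observed = pair_obs[("F", "Y")]
expected = pi["F"] * pj["Y"] / n
```

Pairs whose observed count greatly exceeds the expected count are the candidates selected for significance testing in the paper's approach.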
... often, it could be a sign of a balance problem. Balance problems can make you feel unsteady or as ... fall-related injuries, such as hip fracture. Some balance problems are due to problems in the inner ...
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression…
A feedback linearization approach to orbital maneuvers
NASA Astrophysics Data System (ADS)
Lee, Sanguk
New methods for obtaining optimal orbital maneuvers of a space vehicle in total velocity change are described and applied. The elegance of Lambert's Theorem is combined with feedback linearization and linear optimal control to obtain solutions to nonlinear orbital maneuver problems. In particular, geocentric orbital maneuvers with finite-thrust acceleration are studied. The full nonlinear equations of motion are transformed exactly into a controllable linear set in Brunovsky canonical form by using feedback linearization and choosing the position vector as the fully observable output vector. These equations are used to pose a linear optimal tracking problem with a solution to Lambert's impulsive-thrust two-point boundary-value problem as the reference orbit. The same procedure is used to force the space vehicle to follow a linear analytical solution to the continuous low-thrust orbital maneuver problem between neighboring orbits. Limits on thrust magnitudes are enforced by adjusting the weights on the states in the performance index, which is chosen to be the sum of integrals of the square sum of new control variables and the square sum of state variable errors from the reference trajectory. For comparison purposes, the feedback linearized equations are used to obtain a simple closed-form solution to an orbital maneuver problem without the use of a reference trajectory. In this case, the performance index was chosen as the integral of the square sum of new control variables only. Three different examples, coplanar rendezvous between neighboring orbits, large coplanar orbit transfer, and non-coplanar orbit transfer, are used to show the advantages of using the new methods introduced in this dissertation. The minimum-eccentricity orbit, Hohmann transfer orbit, and minimum energy orbit were used in turn as the reference trajectories. The principal problems encountered in using the new methods are the choices of the proper reference trajectory, a suitable time
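The central trick, cancelling the nonlinearity by feedback so that linear control theory applies, can be sketched on a toy pendulum rather than orbital dynamics (the dynamics and gains here are illustrative, not the dissertation's equations):

```python
import numpy as np

# Toy pendulum: theta'' = -sin(theta) + u. The feedback-linearizing choice
# u = sin(theta) + v cancels the nonlinearity exactly, leaving theta'' = v,
# a controllable linear system stabilized here by a simple PD law.
def simulate(T=20.0, dt=0.001):
    th, om = 1.0, 0.0                  # initial angle (rad) and rate
    for _ in range(int(T / dt)):
        v = -4.0 * th - 2.0 * om       # linear control on the linearized system
        u = np.sin(th) + v             # cancel the pendulum nonlinearity
        om += (-np.sin(th) + u) * dt   # semi-implicit Euler integration
        th += om * dt
    return th, om

th, om = simulate()   # both should decay to near zero
```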
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III (Inventor); Crossley, Edward A., Jr. (Inventor); Jones, Irby W. (Inventor); Miller, James B. (Inventor); Davis, C. Calvin (Inventor); Behun, Vaughn D. (Inventor); Goodrich, Lewis R., Sr. (Inventor)
1992-01-01
A linear mass actuator includes an upper housing and a lower housing connectable to each other and having a central passageway passing axially through a mass that is linearly movable in the central passageway. Rollers mounted in the upper and lower housings in frictional engagement with the mass translate the mass linearly in the central passageway and drive motors operatively coupled to the roller means, for rotating the rollers and driving the mass axially in the central passageway.
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
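Class (1) is easy to sketch: an unweighted least-squares line with a bootstrap estimate of the slope uncertainty (simulated data here; the astronomical applications use real measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 5.0, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)   # true slope 2, intercept 1

slope, intercept = np.polyfit(x, y, 1)

# Bootstrap: refit on resampled points to estimate the slope's scatter
slopes = []
for _ in range(500):
    idx = rng.integers(0, x.size, x.size)
    slopes.append(np.polyfit(x[idx], y[idx], 1)[0])
slope_err = np.std(slopes)
```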
Linear Corrugating - Final Technical Report
Lloyd Chapman
2000-05-23
Linear Corrugating is a process for the manufacture of corrugated containers in which the flutes of the corrugated medium are oriented in the Machine Direction (MD) of the several layers of paper used. Conversely, in the conventional corrugating process the flutes are oriented at right angles to the MD, in the Cross Machine Direction (CD). Paper is stronger in MD than in CD. Therefore, boxes made using the Linear Corrugating process are significantly stronger, in the prime strength criterion, the Box Compression Test (BCT), than boxes made conventionally. This means that using Linear Corrugating, boxes can be manufactured to a BCT equaling that of conventional boxes while containing 30% less fiber. The corrugated container industry is a large part of the U.S. economy, producing over 40 million tons annually. For such a large industry, the potential savings of Linear Corrugating are enormous. The grant for this project covered three phases in the development of the Linear Corrugating process: (1) production and evaluation of corrugated boxes on commercial equipment to verify that boxes so manufactured would have enhanced BCT as proposed in the application; (2) production and evaluation of corrugated boxes made on laboratory equipment using combined board from (1) above but having dual manufacturer's joints (glue joints), a box manufacturing method (Dual Joint) proposed to overcome the box perimeter limitations of the Linear Corrugating process; (3) design, construction, operation and evaluation of an engineering prototype machine to form flutes in corrugating medium in the MD of the paper, the central requirement of the Linear Corrugating process. Items (1) and (2) were successfully completed, showing the predicted BCT increases from the Linear Corrugated boxes and significant strength improvement in the Dual Joint boxes. The Former was constructed and operated successfully using kraft linerboard as the forming medium. It was found that tensile strength and stretch
Finite Element Interface to Linear Solvers
Williams, Alan
2005-03-18
Sparse systems of linear equations arise in many engineering applications, including finite elements, finite volumes, and others. The solution of linear systems is often the most computationally intensive portion of the application. Depending on the complexity of problems addressed by the application, there may be no single solver capable of solving all of the linear systems that arise. This motivates the desire to switch an application from one solver library to another, depending on the problem being solved. The interfaces provided by solver libraries differ greatly, making it difficult to switch an application code from one library to another. The amount of library-specific code in an application can be greatly reduced by having an abstraction layer between solver libraries and the application, putting a common "face" on various solver libraries. One such abstraction layer is the Finite Element Interface to Linear Solvers (FEI), which has seen significant use by finite element applications at Sandia National Laboratories and Lawrence Livermore National Laboratory.
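The abstraction-layer idea the abstract describes can be sketched in a few lines. The following Python sketch is illustrative only; the class and method names are hypothetical and do not reflect the actual FEI (C++) API:

```python
# Minimal sketch of a solver abstraction layer: the application codes to one
# interface; each backend stands in for a different "solver library".
# (All names are illustrative -- this is not the actual FEI API.)

class LinearSolver:
    """Common face the application sees, regardless of backend."""
    def solve(self, A, b):
        raise NotImplementedError

class DirectSolver(LinearSolver):
    """Backend 1: Gaussian elimination with partial pivoting."""
    def solve(self, A, b):
        n = len(b)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
        for k in range(n):
            p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
            M[k], M[p] = M[p], M[k]
            for i in range(k + 1, n):
                f = M[i][k] / M[k][k]
                for j in range(k, n + 1):
                    M[i][j] -= f * M[k][j]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):  # back substitution
            x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
        return x

class JacobiSolver(LinearSolver):
    """Backend 2: Jacobi iteration (suitable for diagonally dominant systems)."""
    def __init__(self, iters=200):
        self.iters = iters
    def solve(self, A, b):
        n = len(b)
        x = [0.0] * n
        for _ in range(self.iters):
            x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        return x

def application_code(solver: LinearSolver):
    """Application logic is written once against the common interface."""
    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    return solver.solve(A, b)
```

Switching "libraries" is then a one-line change in the application: `application_code(DirectSolver())` versus `application_code(JacobiSolver())`.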
Linearly polarized fiber amplifier
Kliner, Dahv A.; Koplow, Jeffery P.
2004-11-30
Optically pumped rare-earth-doped polarizing fibers exhibit significantly higher gain for one linear polarization state than for the orthogonal state. Such a fiber can be used to construct a single-polarization fiber laser, amplifier, or amplified-spontaneous-emission (ASE) source without the need for additional optical components to obtain stable, linearly polarized operation.
Richter, B.
1985-12-01
A report is given on the goals and progress of the SLAC Linear Collider. The status of the machine and the detectors are discussed and an overview is given of the physics which can be done at this new facility. Some ideas on how (and why) large linear colliders of the future should be built are given.
Spectral analysis of linear relations and degenerate operator semigroups
Baskakov, A G; Chernyshov, K I
2002-12-31
Several problems of the spectral theory of linear relations in Banach spaces are considered. Linear differential inclusions in a Banach space are studied. The construction of the phase space and solutions is carried out with the help of the spectral theory of linear relations, ergodic theorems, and degenerate operator semigroups.
NASA Technical Reports Server (NTRS)
Clancy, John P.
1988-01-01
The object of the invention is to provide a mechanical force actuator which is lightweight and manipulatable and utilizes linear motion for push or pull forces while maintaining a constant overall length. The mechanical force producing mechanism comprises a linear actuator mechanism and a linear motion shaft mounted parallel to one another. The linear motion shaft is connected to a stationary or fixed housing and to a movable housing, where the movable housing is mechanically actuated through the actuator mechanism by either manual means or motor means. The housings are adapted to releasably receive a variety of jaw or pulling elements adapted for clamping or prying action. The stationary housing is adapted to be pivotally mounted to permit angular positioning of the housing, allowing the tool to adapt to skewed interfaces. The actuator mechanism is operated by a gear train to obtain linear motion.
Linear models: permutation methods
Cade, B.S.; Everitt, B.S.; Howell, D.C.
2005-01-01
Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable, compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
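The basic permutation machinery behind such tests can be sketched as follows. This is a minimal illustration of permuting the response to test a regression slope, not the specific estimators or test statistics advocated in the entry:

```python
import random

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def permutation_p_value(x, y, n_perm=999, seed=0):
    """Two-sided permutation p-value for H0: slope = 0.
    Shuffling y breaks any x-y association, so the shuffled slopes
    sample the null distribution of the test statistic."""
    rng = random.Random(seed)
    observed = abs(slope(x, y))
    hits = 0
    y_perm = list(y)
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        if abs(slope(x, y_perm)) >= observed:
            hits += 1
    # add-one correction counts the observed arrangement itself
    return (hits + 1) / (n_perm + 1)
```

No distributional assumption about the error term is needed; the same recipe works with any statistic (e.g., a quantile regression estimate) substituted for the least-squares slope.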
LAPACK: Linear algebra software for supercomputers
Bischof, C.H.
1991-01-01
This paper presents an overview of the LAPACK library, a portable, public-domain library to solve the most common linear algebra problems. This library provides a uniformly designed set of subroutines for solving systems of simultaneous linear equations, least-squares problems, and eigenvalue problems for dense and banded matrices. We elaborate on the design methodologies incorporated to make the LAPACK codes efficient on today's high-performance architectures. In particular, we discuss the use of block algorithms and the reliance on the Basic Linear Algebra Subprograms. We present performance results that show the suitability of the LAPACK approach for vector uniprocessors and shared-memory multiprocessors. We also address some issues that have to be dealt with in tuning LAPACK for specific architectures. Lastly, we present results that show that the LAPACK software can be adapted with little effort to distributed-memory environments, and we discuss future efforts resulting from this project. 31 refs., 10 figs., 2 tabs.
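The block-algorithm idea mentioned above can be illustrated with a toy example: the same matrix product computed block by block, so that on real hardware each small block stays in cache. This is a pure-Python sketch of the blocking principle only; actual LAPACK and BLAS implementations are far more sophisticated:

```python
def matmul_naive(A, B):
    """Triple-loop matrix product, for reference."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = A[i][k]
            for j in range(p):
                C[i][j] += aik * B[k][j]
    return C

def matmul_blocked(A, B, bs=2):
    """Same product, computed block by block. On real hardware each
    bs-by-bs block stays in fast memory, which is why LAPACK casts the
    bulk of its work as matrix-matrix (Level-3 BLAS) operations."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, m, bs):
            for jj in range(0, p, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, m)):
                        aik = A[i][k]
                        for j in range(jj, min(jj + bs, p)):
                            C[i][j] += aik * B[k][j]
    return C
```

Both routines do the same arithmetic; only the traversal order differs, which is exactly the degree of freedom blocked LAPACK algorithms exploit.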
Singh, Mangal; Awasthi, Ashutosh; Soni, Sumit K; Singh, Rakshapal; Verma, Rajesh K; Kalra, Alok
2015-10-27
An assessment of the roles of rhizospheric microbial diversity in plant growth is helpful in understanding plant-microbe interactions. Using random combinations of rhizospheric bacterial species at different richness levels, we analysed the contribution of species richness, composition, interactions and identity to soil microbial respiration and plant biomass. We showed that bacterial inoculation in the plant rhizosphere enhanced microbial respiration and plant biomass, with complementary relationships among bacterial species. Plant growth was found to increase linearly with inoculation of rhizospheric bacterial communities with increasing levels of species or plant growth promoting trait diversity. However, inoculation of diverse bacterial communities having a single plant growth promoting trait, i.e., nitrogen fixation, could not enhance plant growth over inoculation of a single bacterium. Our results indicate that bacterial diversity in the rhizosphere affects ecosystem functioning through complementary relationships among plant growth promoting traits and may play significant roles in delivering microbial services to plants.
Linearization of Schwarzschild's line element - Application to the clock paradox.
NASA Technical Reports Server (NTRS)
Broucke, R.
1971-01-01
This article studies the relativistic theory of the motion of a particle in the presence of a uniform acceleration field. The problem is introduced as a linearization of the fundamental line element of general relativity. The linearized line element is a solution of Einstein's field equations. The equations of geodesics corresponding to this line element are solved and applied to the clock paradox problem.
THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)
This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...
Multicollinearity in hierarchical linear models.
Yu, Han; Jiang, Shanhe; Land, Kenneth C
2015-09-01
This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model.
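A common diagnostic in this setting is the variance inflation factor, VIF = 1/(1 - R^2), where R^2 comes from regressing one predictor on the others. As a minimal sketch (the two-predictor special case, where R^2 is simply the squared correlation; this is the generic recipe, not the authors' top-down HLM procedure):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two predictors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def vif_two_predictors(x1, x2):
    """VIF = 1 / (1 - R^2); with a single other regressor, R^2 = r^2.
    VIF near 1 means little collinearity; VIF > 10 is a common alarm
    threshold signalling inflated standard errors."""
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r ** 2)
```

With more than two predictors the same formula applies per predictor, with R^2 taken from a full auxiliary regression on all remaining predictors.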
A Linear Bicharacteristic FDTD Method
NASA Technical Reports Server (NTRS)
Beggs, John H.
2001-01-01
The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics [1]-[7]. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility by the ability to adopt a characteristic based method. The use of characteristic variables allows the LBS to treat the outer computational boundaries naturally using the exact compatibility equations. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, plus it generalizes much easier to nonuniform grids. It has previously been applied to two and three-dimensional freespace electromagnetic propagation and scattering problems [3], [6], [7]. This paper extends the LBS to model lossy dielectric and magnetic materials. Results are presented for several one-dimensional model problems, and the FDTD algorithm is chosen as a convenient reference for comparison.
NASA Technical Reports Server (NTRS)
Studer, P. A. (Inventor)
1983-01-01
A linear magnetic bearing system having electromagnetic vernier flux paths in shunt relation with permanent magnets, so that the vernier flux does not traverse the permanent magnet, is described. Novelty is believed to reside in providing a linear magnetic bearing having electromagnetic flux paths that bypass high reluctance permanent magnets. Particular novelty is believed to reside in providing a linear magnetic bearing with a pair of axially spaced elements having electromagnets for establishing vernier x and y axis control. The magnetic bearing system has possible use in connection with a long life reciprocating cryogenic refrigerator that may be used on the space shuttle.
Repair of overheating linear accelerator
Barkley, Walter; Baldwin, William; Bennett, Gloria; Bitteker, Leo; Borden, Michael; Casados, Jeff; Fitzgerald, Daniel; Gorman, Fred; Johnson, Kenneth; Kurennoy, Sergey; Martinez, Alberto; O’Hara, James; Perez, Edward; Roller, Brandon; Rybarcyk, Lawrence; Stark, Peter; Stockton, Jerry
2004-01-01
Los Alamos Neutron Science Center (LANSCE) is a proton accelerator that produces high energy particle beams for experiments. These beams include neutrons and protons for diverse uses including radiography, isotope production, small feature study, lattice vibrations and material science. The Drift Tube Linear Accelerator (DTL) is the first portion of a half mile long linear section of accelerator that raises the beam energy from 750 keV to 100 MeV. In its 31st year of operation (2003), the DTL experienced serious issues. The first problem was the inability to maintain resonant frequency at full power. The second problem was increased occurrences of over-temperature failure of cooling hoses. These shortcomings led to an investigation during the 2003 yearly preventative maintenance shutdown that showed evidence of excessive heating: discolored interior tank walls and copper oxide deposition in the cooling circuits. Since overheating was suspected to be caused by compromised heat transfer, improving that was the focus of the repair effort. Investigations revealed copper oxide flow inhibition and iron oxide scale build up. Acid cleaning was implemented with careful attention to protection of the base metal, selection of components to clean and minimization of exposure times. The effort has been very successful in bringing the accelerator through a complete eight month run cycle allowing an incredible array of scientific experiments to be completed this year (2003-2004). This paper will describe the systems, investigation analysis, repair, return to production and conclusion.
Limit cycles of linear vector fields on manifolds
NASA Astrophysics Data System (ADS)
Llibre, Jaume; Zhang, Xiang
2016-10-01
It is well known that linear vector fields on the manifold R^n cannot have limit cycles, but this is not the case for linear vector fields on other manifolds. We study the periodic orbits of linear vector fields on different manifolds, and motivate and present an open problem on the number of limit cycles of linear vector fields on a class of C^1 connected manifolds.
... is the device most commonly used for external beam radiation treatments for patients with cancer. The linear ... shape of the patient's tumor and the customized beam is directed to the patient's tumor. The beam ...
Isolated linear blaschkoid psoriasis.
Nasimi, M; Abedini, R; Azizpour, A; Nikoo, A
2016-10-01
Linear psoriasis (LPs) is considered a rare clinical presentation of psoriasis, which is characterized by linear erythematous and scaly lesions along the lines of Blaschko. We report the case of a 20-year-old man who presented with asymptomatic linear and S-shaped erythematous, scaly plaques on right side of his trunk. The plaques were arranged along the lines of Blaschko with a sharp demarcation at the midline. Histological examination of a skin biopsy confirmed the diagnosis of psoriasis. Topical calcipotriol and betamethasone dipropionate ointments were prescribed for 2 months. A good clinical improvement was achieved, with reduction in lesion thickness and scaling. In patients with linear erythematous and scaly plaques along the lines of Blaschko, the diagnosis of LPs should be kept in mind, especially in patients with asymptomatic lesions of late onset. PMID:27663156
Lorentz Invariance Violation: the Latest Fermi Results and the GRB-AGN Complementarity
NASA Technical Reports Server (NTRS)
Bolmont, J.; Vasileiou, V.; Jacholkowska, A.; Piron, F.; Couturier, C.; Granot, J.; Stecker, F. W.; Cohen-Tanugi, J.; Longo, F.
2013-01-01
Because they are bright and distant, Gamma-ray Bursts (GRBs) have been used for more than a decade to test the propagation of photons and to constrain relevant Quantum Gravity (QG) models in which the velocity of photons in vacuum can depend on their energy. With its unprecedented sensitivity and energy coverage, the Fermi satellite has provided the most constraining results on the QG energy scale so far. In this talk, the latest results obtained from the analysis of four bright GRBs observed by the Large Area Telescope will be reviewed. These robust results, cross-checked using three different analysis techniques, set the limit on the QG energy scale at E_QG,1 > 7.6 times the Planck energy for linear dispersion and E_QG,2 > 1.3 x 10^11 GeV for quadratic dispersion (95% CL). After describing the data and the analysis techniques in use, the results will be discussed and confronted with the latest constraints obtained with Active Galactic Nuclei.
NASA Technical Reports Server (NTRS)
Laughlin, Darren
1995-01-01
Inertial linear actuators developed to suppress residual accelerations of nominally stationary or steadily moving platforms. Function like long-stroke version of voice coil in conventional loudspeaker, with superimposed linear variable-differential transformer. Basic concept also applicable to suppression of vibrations of terrestrial platforms. For example, laboratory table equipped with such actuators plus suitable vibration sensors and control circuits made to vibrate much less in presence of seismic, vehicular, and other environmental vibrational disturbances.
Shetty, Shricharith; Rao, Raghavendra; Kudva, R Ranjini; Subramanian, Kumudhini
2016-01-01
Alopecia areata (AA) over the scalp is known to present with various shapes and extents of hair loss. Typically it presents as circumscribed patches of alopecia with the underlying skin remaining normal. We describe a rare variant of AA presenting in a linear, band-like form. Only four cases of linear alopecia have been reported in the medical literature to date, all four diagnosed as lupus erythematosus profundus. PMID:27625568
Sapijanskas, Jurgis; Loreau, Michel
2010-12-01
The influence of diversity on ecosystem functioning and ecosystem services is now well established. Yet predictive mechanistic models that link species traits and community-level processes remain scarce, particularly for multitrophic systems. Here we revisit MacArthur's classical consumer resource model and develop a trait-based approach to predict the effects of consumer diversity on cascading extinctions and aggregated ecosystem processes in a two-trophic-level system. We show that functionally redundant efficient consumers generate top-down cascading extinctions. This counterintuitive result reveals the limits of the functional redundancy concept to predict the consequences of species deletion. Our model also predicts that the biodiversity-ecosystem functioning relationship is different for different ecosystem processes and depends on the range of variation of consumer traits in the regional species pool, which determines the sign of selection effects. Lastly, competition among resources and consumer generalism both weaken complementarity effects, which suggests that selection effects may prevail at higher trophic levels. Our work emphasizes the potential of trait-based approaches for transforming biodiversity and ecosystem functioning research into a more predictive science.
Liu, Bitao; Li, Hongbo; Zhu, Biao; Koide, Roger T; Eissenstat, David M; Guo, Dali
2015-10-01
In most cases, both roots and mycorrhizal fungi are needed for plant nutrient foraging. Frequently, the colonization of roots by arbuscular mycorrhizal (AM) fungi seems to be greater in species with thick and sparsely branched roots than in species with thin and densely branched roots. Yet, whether a complementarity exists between roots and mycorrhizal fungi across these two types of root system remains unclear. We measured traits related to nutrient foraging (root morphology, architecture and proliferation, AM colonization and extramatrical hyphal length) across 14 coexisting AM subtropical tree species following root pruning and nutrient addition treatments. After root pruning, species with thinner roots showed more root growth, but lower mycorrhizal colonization, than species with thicker roots. Under multi-nutrient (NPK) addition, root growth increased, but mycorrhizal colonization decreased significantly, whereas no significant changes were found under nitrogen or phosphate additions. Moreover, root length proliferation was mainly achieved by altering root architecture, but not root morphology. Thin-root species seem to forage nutrients mainly via roots, whereas thick-root species rely more on mycorrhizal fungi. In addition, the reliance on mycorrhizal fungi was reduced by nutrient additions across all species. These findings highlight complementary strategies for nutrient foraging across coexisting species with contrasting root traits.
Mehra, J.
1987-05-01
In this paper, the main outlines of the discussions between Niels Bohr and Albert Einstein, Werner Heisenberg, and Erwin Schroedinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory - formulated in fall 1926 by Dirac, London, and Jordan - Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such - formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930 - were continued during the next decades. All these aspects are briefly summarized.
Dedouit, Fabrice; Géraut, Annie; Baranov, Vladimir; Ludes, Bertrand; Rougé, Daniel; Telmon, Norbert; Crubézy, Eric
2010-07-15
Since 2004, a multidisciplinary Franco-Russian expedition discovered in the Sakha Republic (Yakutiya) more than 60 tombs preserved by the permafrost. In July 2006, an exceptionally well-preserved mummy was unearthed. The coffin, burial furniture and clothes suggested a shaman's tomb. Multislice computed tomography (MSCT) was performed before autopsy with forensic and anthropological aims. Forensic study aimed to detect any lesions and determine the manner of death. Anthropological study aimed to determine the mummy's gender, age at death, morphological affinity, stature and body mass. She was female and virginity status was assessed. The radiological and forensic conclusions were compared. Imaging confirmed most autopsy findings, suggesting that death followed disseminated infection. MSCT could not formally exclude a traumatic death because close examination of the skin was difficult, but was superior to conventional autopsy in diagnosis of infectious lesions of the left sacroiliac joint and one pelvic lesion. Autopsy detected a post-infectious spinal lesion, misinterpreted on MSCT as a Schmorl's node. However, most conclusions of virtual and conventional anthropological studies agreed. Age at death was estimated around 19 years old. The morphology of the mummy was mongoloid. MSCT identified the craniometric characteristics as similar to those of the Buryat population. The deceased's stature was 146 cm and estimated body mass was 49 kg. MSCT demonstrated its great potential and complementarity with conventional autopsy and anthropological techniques in the study of this natural female mummy buried in 1728. PMID:20399045
Extended Decentralized Linear-Quadratic-Gaussian Control
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2000-01-01
A straightforward extension of a solution to the decentralized linear-quadratic-Gaussian problem is proposed that allows its use for commonly encountered classes of problems that are currently solved with the extended Kalman filter. This extension allows the system to be partitioned in such a way as to exclude the nonlinearities from the essential algebraic relationships that allow the estimation and control to be optimally decentralized.
Word Problems: A "Meme" for Our Times.
ERIC Educational Resources Information Center
Leamnson, Robert N.
1996-01-01
Discusses a novel approach to word problems that involves linear relationships between variables. Argues that working stepwise through intermediates is the way our minds actually work and therefore this should be used in solving word problems. (JRH)
Character displacement and the evolution of niche complementarity in a model biofilm community
Ellis, Crystal N; Traverse, Charles C; Mayo-Smith, Leslie; Buskirk, Sean W; Cooper, Vaughn S
2015-01-01
Colonization of vacant environments may catalyze adaptive diversification and be followed by competition within the nascent community. How these interactions ultimately stabilize and affect productivity are central problems in evolutionary ecology. Diversity can emerge by character displacement, in which selection favors phenotypes that exploit an alternative resource and reduce competition, or by facilitation, in which organisms change the environment and enable different genotypes or species to become established. We previously developed a model of long-term experimental evolution in which bacteria attach to a plastic bead, form a biofilm, and disperse to a new bead. Here, we focus on the evolution of coexisting mutants within a population of Burkholderia cenocepacia and how their interactions affected productivity. Adaptive mutants initially competed for space, but later competition declined, consistent with character displacement and the predicted effects of the evolved mutations. The community reached a stable equilibrium as each ecotype evolved to inhabit distinct, complementary regions of the biofilm. Interactions among ecotypes ultimately became facilitative and enhanced mixed productivity. Observing the succession of genotypes within niches illuminated changing selective forces within the community, including a fundamental role for genotypes producing small colony variants that underpin chronic infections caused by B. cenocepacia. PMID:25494960
Optical systolic solutions of linear algebraic equations
NASA Technical Reports Server (NTRS)
Neuman, C. P.; Casasent, D.
1984-01-01
The philosophy and data encoding possible in the systolic array optical processor (SAOP) were reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include such linear algebraic algorithms as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, design of parallel algorithms and flexible architectures, application of these architectures to computationally intensive physical problems, error source modeling of optical processors, and matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
1993-01-01
A Linear Motion Encoding device for measuring the linear motion of a moving object is disclosed in which a light source is mounted on the moving object and a position-sensitive detector, such as an array photodetector, is mounted on a nearby stationary object. The light source emits a light beam directed toward the array photodetector such that a light spot is created on the array. An analog-to-digital converter, connected to the array photodetector, is used for reading the position of the spot on the array photodetector. A microprocessor and memory are connected to the analog-to-digital converter to hold and manipulate the data it provides on the position of the spot and to compute the linear displacement of the moving object from those data.
Linear solvers on multiprocessor machines
Kalogerakis, M.A.
1986-01-01
Two new methods are introduced for the parallel solution of banded linear systems on multiprocessor machines. Moreover, some new techniques are obtained as variations of the two methods that are applicable to special instances of the problem. Comparisons with the best known methods are performed, from which it is concluded that the two methods are superior, while their variations for special instances are, in general, competitive and in some cases best. In the process, some new results on the parallel prefix problem are obtained and a new design for this problem is presented that is suitable for VLSI implementation. Furthermore, a general model is introduced for the analysis and classification of methods that are based on row transformations of matrices. It is seen that most known methods are included in this model. It is demonstrated that this model may be used as a basis for the analysis as well as the generation of important aspects of those methods, such as their arithmetic complexity and interprocessor communication requirements.
Hierarchical Linear Modeling of Creative Artists' Problem Solving Behaviors
ERIC Educational Resources Information Center
Kozbelt, Aaron
2008-01-01
College art students were videotaped creating original drawings from an array of objects. Judges reliably assessed the creativity of the drawings. Videos of the creation of ten high- and ten low-rated drawings were coded frame-by-frame to quantify the extent to which artists engaged in several categories of activities (selecting objects, selecting…
Conservation laws for multidimensional systems and related linear algebra problems
NASA Astrophysics Data System (ADS)
Igonin, Sergei
2002-12-01
We consider multidimensional systems of PDEs of generalized evolution form with t-derivatives of arbitrary order on the left-hand side and with the right-hand side dependent on lower order t-derivatives and arbitrary space derivatives. For such systems we find an explicit necessary condition for the existence of higher conservation laws in terms of the system's symbol. For systems that violate this condition we give an effective upper bound on the order of conservation laws. Using this result, we completely describe conservation laws for viscous transonic equations, for the Brusselator model and the Belousov-Zhabotinskii system. To achieve this, we solve over an arbitrary field the matrix equations SA = A^tS and SA = -A^tS for a square matrix A and its transpose A^t, which may be of independent interest.
Radiation Hydrodynamics Test Problems with Linear Velocity Profiles
Hendon, Raymond C.; Ramsey, Scott D.
2012-08-22
As an extension of the works of Coggeshall and Ramsey, a class of analytic solutions to the radiation hydrodynamics equations is derived for code verification purposes. These solutions are valid under assumptions including diffusive radiation transport, a polytropic gas equation of state, constant conductivity, separable flow velocity proportional to the curvilinear radial coordinate, and divergence-free heat flux. In accordance with these assumptions, the derived solution class is mathematically invariant with respect to the presence of radiative heat conduction, and thus represents a solution to the compressible flow (Euler) equations with or without conduction terms included. With this solution class, a quantitative code verification study (using spatial convergence rates) is performed for the cell-centered, finite volume, Eulerian compressible flow code xRAGE developed at Los Alamos National Laboratory. Simulation results show near second order spatial convergence in all physical variables when using the hydrodynamics solver only, consistent with that solver's underlying order of accuracy. However, contrary to the mathematical properties of the solution class, when heat conduction algorithms are enabled the calculation does not converge to the analytic solution.
A minimum time control algorithm for linear and nonlinear systems
NASA Technical Reports Server (NTRS)
Wen, J.; Desrochers, A. A.
1985-01-01
The minimum time control problem with bounded control has long been of interest to control engineers. Much of the theoretical study of this problem has been limited to linear systems and the results are usually problem-specific. This paper presents a new computational method for solving the minimum-time control problem when the control action is assumed to be bang-bang. A gradient-based algorithm is developed where the switching times are updated to minimize the final state missed distance. The algorithm is applicable to linear and nonlinear problems, including multi-input systems.
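A minimal hedged sketch of the switching-time idea, for a double integrator with a single symmetric switch rather than the paper's general multi-input algorithm: the final-state miss distance is minimized over the switch time by a finite-difference gradient step. The function names are illustrative, not from the paper.

```python
# Illustrative sketch only: a double integrator x'' = u with |u| <= 1,
# under bang-bang control u = +1 for ts seconds, then u = -1 for ts
# seconds (so the final velocity is zero).  The switch time ts is
# updated by gradient descent on the squared final-position miss.

def final_position(ts):
    """Final position after the accelerate/decelerate profile above."""
    # accelerate: position ts^2/2, velocity ts; decelerating adds ts^2/2
    return ts * ts

def solve_switch_time(target, lr=0.05, steps=2000, h=1e-6):
    """Gradient descent on the squared miss distance over the switch time."""
    ts = 1.0
    miss = lambda t: (final_position(t) - target) ** 2
    for _ in range(steps):
        grad = (miss(ts + h) - miss(ts - h)) / (2 * h)  # central difference
        ts -= lr * grad
    return ts

ts = solve_switch_time(4.0)  # final position is ts^2 = 4, so ts tends to 2
```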
Individualized Math Problems in Simple Equations. Oregon Vo-Tech Mathematics Problem Sets.
ERIC Educational Resources Information Center
Cosler, Norma, Ed.
This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume require solution of linear equations, systems…
Irreducible Characters of General Linear Superalgebra and Super Duality
NASA Astrophysics Data System (ADS)
Cheng, Shun-Jen; Lam, Ngau
2010-09-01
We develop a new method to solve the irreducible character problem for a wide class of modules over the general linear superalgebra, including all the finite-dimensional modules, by directly relating the problem to the classical Kazhdan-Lusztig theory. Furthermore, we prove that certain parabolic BGG categories over the general linear algebra and over the general linear superalgebra are equivalent. We also verify a parabolic version of a conjecture of Brundan on the irreducible characters in the BGG category of the general linear superalgebra.
ERIC Educational Resources Information Center
Hale, Norman; Lindelow, John
Chapter 12 in a volume on school leadership, this chapter cites the work of several authorities concerning problem-solving or decision-making techniques based on the belief that group problem-solving effort is preferable to individual effort. The first technique, force-field analysis, is described as a means of dissecting complex problems into…
Improved Electrohydraulic Linear Actuators
NASA Technical Reports Server (NTRS)
Hamtil, James
2004-01-01
A product line of improved electrohydraulic linear actuators has been developed. These actuators are designed especially for use in actuating valves in rocket-engine test facilities. They are also adaptable to many industrial uses, such as steam turbines, process control valves, dampers, motion control, etc. The advantageous features of the improved electrohydraulic linear actuators are best described with respect to shortcomings of prior electrohydraulic linear actuators that the improved ones are intended to supplant. The flow of hydraulic fluid to the two ports of the actuator cylinder is controlled by a servo valve that is controlled by a signal from a servo amplifier that, in turn, receives an analog position-command signal (a current having a value between 4 and 20 mA) from a supervisory control system of the facility. As the position command changes, the servo valve shifts, causing a greater flow of hydraulic fluid to one side of the cylinder and thereby causing the actuator piston to move to extend or retract a piston rod from the actuator body. A linear variable differential transformer (LVDT) directly linked to the piston provides a position-feedback signal, which is compared with the position-command signal in the servo amplifier. When the position-feedback and position-command signals match, the servo valve moves to its null position, in which it holds the actuator piston at a steady position.
Duck, F
2010-01-01
The propagation of acoustic waves is a fundamentally non-linear process, and only waves with infinitesimally small amplitudes may be described by linear expressions. In practice, all ultrasound propagation is associated with a progressive distortion in the acoustic waveform and the generation of frequency harmonics. At the frequencies and amplitudes used for medical diagnostic scanning, the waveform distortion can result in the formation of acoustic shocks, excess deposition of energy, and acoustic saturation. These effects occur most strongly when ultrasound propagates within liquids with comparatively low acoustic attenuation, such as water, amniotic fluid, or urine. Attenuation by soft tissues limits but does not extinguish these non-linear effects. Harmonics may be used to create tissue harmonic images. These offer improvements over conventional B-mode images in spatial resolution and, more significantly, in the suppression of acoustic clutter and side-lobe artefacts. The quantity B/A has promise as a parameter for tissue characterization, but methods for imaging B/A have shown only limited success. Standard methods for the prediction of tissue in-situ exposure from acoustic measurements in water, whether for regulatory purposes, for safety assessment, or for planning therapeutic regimes, may be in error because of unaccounted non-linear losses. Biological effects mechanisms are altered by finite-amplitude effects. PMID:20349813
NASA Technical Reports Server (NTRS)
Chandler, J. A. (Inventor)
1985-01-01
The linear motion valve is described. The valve spool employs magnetically permeable rings, spaced apart axially, which engage a sealing assembly having magnetically permeable pole pieces in magnetic relationship with a magnet. The gap between the ring and the pole pieces is sealed with a ferrofluid. Depletion of the ferrofluid is minimized.
Resistors Improve Ramp Linearity
NASA Technical Reports Server (NTRS)
Kleinberg, L. L.
1982-01-01
Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.
ERIC Educational Resources Information Center
Dobbs, David E.
2013-01-01
A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as "n to infinity." This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
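The direct method can be made concrete. For the recurrence x_{n+1} = a·x_n + b with constant coefficients, the closed form and its limiting value can be sketched as follows (a hedged illustration; the function name is mine, not the note's):

```python
# Hedged illustration of the direct method: the recurrence
# x_{n+1} = a*x_n + b has closed form a^n*x0 + b*(1 - a^n)/(1 - a)
# for a != 1, with limit b/(1 - a) as n -> infinity when |a| < 1.

def recurrence_term(a, b, x0, n):
    """Closed-form n-th term of x_{k+1} = a*x_k + b."""
    if a == 1:
        return x0 + n * b  # degenerate arithmetic-progression case
    return a**n * x0 + b * (1 - a**n) / (1 - a)

# Iterating directly agrees with the closed form:
x = 10.0
for _ in range(20):
    x = 0.5 * x + 3.0
# recurrence_term(0.5, 3.0, 10.0, 20) matches x; the limiting value is
# b/(1 - a) = 3.0/(1 - 0.5) = 6.0
```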
The 'hard problem' and the quantum physicists. Part 1: the first generation.
Smith, C U M
2006-07-01
All four of the most important figures in the early twentieth-century development of quantum physics-Niels Bohr, Erwin Schroedinger, Werner Heisenberg and Wolfgang Pauli-had strong interests in the traditional mind-brain, or 'hard,' problem. This paper reviews their approach to this problem, showing the influence of Bohr's complementarity thesis, the significance of Schroedinger's small book, 'What is life?,' the updated Platonism of Heisenberg and, perhaps most interesting of all, the interaction of Carl Jung and Wolfgang Pauli in the latter's search for a unification of mind and matter.
Postma, Johannes A.; Lynch, Jonathan P.
2012-01-01
Background and Aims During their domestication, maize, bean and squash evolved in polycultures grown by small-scale farmers in the Americas. Polycultures often overyield on low-fertility soils, which are a primary production constraint in low-input agriculture. We hypothesized that root architectural differences among these crops cause niche complementarity and thereby greater nutrient acquisition than in the corresponding monocultures. Methods A functional–structural plant model, SimRoot, was used to simulate the first 40 d of growth of these crops in monoculture and polyculture and to determine the effects of root competition on nutrient uptake and biomass production of each plant on low-nitrogen, -phosphorus and -potassium soils. Key Results Squash, the earliest domesticated crop, was most sensitive to low soil fertility, while bean, the most recently domesticated crop, was least sensitive to low soil fertility. Nitrate uptake and biomass production were up to 7 % greater in the polycultures than in the monocultures, but only when root architecture was taken into account. Enhanced nitrogen capture in polycultures was independent of nitrogen fixation by bean. Root competition had negligible effects on phosphorus or potassium uptake or biomass production. Conclusions We conclude that spatial niche differentiation caused by differences in root architecture allows polycultures to overyield when plants are competing for mobile soil resources. However, direct competition for immobile resources might be negligible in agricultural systems. Interspecies root spacing may also be too large to allow maize to benefit from root exudates of bean or squash. Above-ground competition for light, however, may have strong feedbacks on root foraging for immobile nutrients, which may increase cereal growth more than it will decrease the growth of the other crops. We note that the order of domestication of crops correlates with increasing nutrient efficiency, rather than production
NASA Astrophysics Data System (ADS)
Thum, T.; Peylin, P.; Granier, A.; Ibrom, A.; Linden, L.; Loustau, D.; Bacour, C.; Ciais, P.
2010-12-01
Assimilating data from several types of measurements provides knowledge of a model's performance and uncertainties. In this work we investigate the complementarity of biomass data to net CO2 flux (NEE) and latent heat flux (LE) in optimising parameters of the biogeochemical model ORCHIDEE. Our optimisation method is a gradient-based iterative method. We optimised the model at two French forest sites: the European beech forest of Hesse (48.67°N, 7.06°E) and the maritime pine forest of Le Bray (44.72°N, 0.77°W). First we adapted the model to represent the past clearcut at these two sites in order to obtain a realistic age of the forest; the resulting model-data improvement in terms of aboveground biomass will be discussed. We then used FluxNet and biomass data, separately and together, in the optimisation process to assess the potential and the complementarity of these two data streams. For the biomass-data optimisation we added parameters linked to allocation to the optimisation scheme. The results show a decrease in the uncertainty of the parameters after optimisation and reveal some structural deficiencies in the model. In a second step, data from the ecosystem manipulation experiment site Brandbjerg (55.88°N, 11.97°E), a Danish grassland site, were used for model optimisation. The ecosystem experiments at this site include rain exclusion, warming, and increased CO2 concentration; only biomass data were available and used in the optimisation for the different treatments. We investigate the ability of the model to represent the biomass differences between manipulative experiments with a given set of parameters and highlight model deficiencies.
PC Basic Linear Algebra Subroutines
1992-03-09
PC-BLAS is a highly optimized version of the Basic Linear Algebra Subprograms (BLAS), a standardized set of thirty-eight routines that perform low-level operations on vectors of numbers in single and double-precision real and complex arithmetic. Routines are included to find the index of the largest component of a vector, apply a Givens or modified Givens rotation, multiply a vector by a constant, determine the Euclidean length, perform a dot product, swap and copy vectors, and find the norm of a vector. The BLAS have been carefully written to minimize numerical problems such as loss of precision and underflow and are designed so that the computation is independent of the interface with the calling program. This independence is achieved through judicious use of Assembly language macros. Interfaces are provided for Lahey Fortran 77, Microsoft Fortran 77, and Ryan-McFarland IBM Professional Fortran.
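The level-1 operations the package standardizes are simple enough to sketch. Below is a hedged pure-Python illustration of four representative routines; this is not PC-BLAS itself and omits its assembly-level optimization:

```python
# Pure-Python analogues of a few BLAS level-1 routines (illustrative only):
# index of largest component, dot product, Euclidean norm, y := a*x + y.
import math

def iamax(x):
    """Index of the component with largest absolute value (BLAS IxAMAX)."""
    return max(range(len(x)), key=lambda i: abs(x[i]))

def dot(x, y):
    """Dot product (BLAS xDOT)."""
    return sum(a * b for a, b in zip(x, y))

def nrm2(x):
    """Euclidean length, scaled to avoid overflow/underflow (BLAS xNRM2)."""
    s = max((abs(v) for v in x), default=0.0)
    if s == 0.0:
        return 0.0
    return s * math.sqrt(sum((v / s) ** 2 for v in x))

def axpy(a, x, y):
    """Return a*x + y elementwise (BLAS xAXPY)."""
    return [a * xi + yi for xi, yi in zip(x, y)]
```

The scaling in `nrm2` mirrors the numerical care the abstract mentions: dividing by the largest magnitude before squaring avoids the overflow and underflow a naive sum of squares can cause.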
NASA Technical Reports Server (NTRS)
Goldowsky, Michael P. (Inventor)
1987-01-01
A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.
General linear chirplet transform
NASA Astrophysics Data System (ADS)
Yu, Gang; Zhou, Yiqi
2016-03-01
Time-frequency (TF) analysis (TFA) is an effective tool for characterizing the time-varying features of a signal and has drawn much attention over a long period. As TFA has developed, many advanced methods have been proposed that provide more precise TF results, but some restrictions are inevitably introduced. In this paper, we introduce a novel TFA method, termed the general linear chirplet transform (GLCT), which overcomes some limitations of current TFA methods. In numerical and experimental validations, comparison with current TFA methods demonstrates several advantages of GLCT: it characterizes well multi-component signals with distinct non-linear features, is independent of the mathematical model and the initial TFA method, allows reconstruction of the component of interest, and is insensitive to noise.
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr.; Curry, Kenneth C.
1990-01-01
Electrically charged helices attract or repel each other. Proposed electrostatic linear actuator made with intertwined dual helices, which holds charge-bearing surfaces. Dual-helix configuration provides relatively large unbroken facing charged surfaces (relatively large electrostatic force) within small volume. Inner helix slides axially in outer helix in response to voltages applied to conductors. Spiral form also makes components more rigid. Actuator conceived to have few moving parts and to be operable after long intervals of inactivity.
Buttram, M.T.; Ginn, J.W.
1988-06-21
A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.
Relativistic Linear Restoring Force
ERIC Educational Resources Information Center
Clark, D.; Franklin, J.; Mann, N.
2012-01-01
We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions dp/dt or dp/dτ. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…
Combustion powered linear actuator
Fischer, Gary J.
2007-09-04
The present invention provides robotic vehicles having wheeled and hopping mobilities that are capable of traversing (e.g. by hopping over) obstacles that are large in size relative to the robot and, are capable of operation in unpredictable terrain over long range. The present invention further provides combustion powered linear actuators, which can include latching mechanisms to facilitate pressurized fueling of the actuators, as can be used to provide wheeled vehicles with a hopping mobility.
ALPS - A LINEAR PROGRAM SOLVER
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
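As a hedged illustration of what any small LP solver must do (this is not ALPS's actual method), a two-variable LP can be solved by enumerating constraint intersections, since an optimum of a bounded feasible LP lies at a vertex of the feasible polygon:

```python
# Illustrative sketch: maximize c.x subject to A x <= b in two variables
# by brute-force vertex enumeration.  Real solvers (including ALPS's
# simplex-style techniques) are far more efficient; this only shows the
# geometry that makes LPs solvable.
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Return (best objective value, optimizer) or None if no vertex found."""
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a11, a12), (a21, a22) = A[i], A[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel constraints: no unique intersection
        x = (b[i] * a22 - a12 * b[j]) / det   # Cramer's rule
        y = (a11 * b[j] - b[i] * a21) / det
        # keep the vertex only if it satisfies every constraint
        if all(ax * x + ay * y <= bk + 1e-9 for (ax, ay), bk in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best

# maximize 3x + 2y  s.t.  x + y <= 4,  x <= 3,  x >= 0,  y >= 0
A = [(1, 1), (1, 0), (-1, 0), (0, -1)]
b = [4, 3, 0, 0]
best = solve_lp_2d((3, 2), A, b)  # optimum 11 at the vertex (3, 1)
```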
SLAPP: A systolic linear algebra parallel processor
Drake, B.L.; Luk, F.T.; Speiser, J.M.; Symanski, J.J.
1987-07-01
Systolic array computer architectures provide a means for fast computation of the linear algebra algorithms that form the building blocks of many signal-processing algorithms, facilitating their real-time computation. For applications to signal processing, the systolic array operates on matrices, an inherently parallel view of the data, using numerical linear algebra algorithms that have been suitably parallelized to efficiently utilize the available hardware. This article describes work currently underway at the Naval Ocean Systems Center, San Diego, California, to build a two-dimensional systolic array, SLAPP, demonstrating efficient and modular parallelization of key matrix computations for real-time signal- and image-processing problems.
Reset stabilisation of positive linear systems
NASA Astrophysics Data System (ADS)
Zhao, Xudong; Yin, Yunfei; Shen, Jun
2016-09-01
In this paper, the problems of reset stabilisation for positive linear systems (PLSs) are investigated. Some properties relating to reset control of PLSs are first revealed. It is shown that these properties are different from the corresponding ones of general linear systems. Second, a class of periodic reset scheme is designed to exponentially stabilise an unstable PLS with a prescribed decay rate. Then, for a given PLS with reset control, some discussions on the upper bound of its decay rate are presented. Meanwhile, the reset stabilisation for PLSs in a special case is probed as well. Finally, two numerical examples are used to demonstrate the correctness and effectiveness of the obtained theoretical results.
Reachability analysis of rational eigenvalue linear systems
NASA Astrophysics Data System (ADS)
Xu, Ming; Chen, Liangyu; Zeng, Zhenbing; Li, Zhi-bin
2010-12-01
One of the key problems in the safety analysis of control systems is the exact computation of reachable state spaces for continuous-time systems. Issues related to the controllability and observability of these systems are well studied in systems theory. However, there are not many results on reachability, even for general linear systems. In this study, we present a large class of linear systems with decidable reachable state spaces. This is approached by reducing the reachability analysis to real root isolation of exponential polynomials. Furthermore, we have implemented this method in a Maple package based on symbolic computation and applied it to several examples successfully.
Pattern Search Methods for Linearly Constrained Minimization
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.
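The derivative-free polling idea behind pattern search can be sketched on an unconstrained problem. This is only an illustrative compass search, not the authors' linearly constrained algorithm, whose patterns must additionally conform to the boundary geometry:

```python
# Illustrative compass (pattern) search: poll the 2n coordinate directions
# around the current point; if none improves the objective, contract the
# step.  No gradients or directional derivatives are used.

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    x = list(x0)
    n = len(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(n):            # poll +/- step along each coordinate
            for s in (+step, -step):
                y = x[:]
                y[i] += s
                if f(y) < f(x):
                    x, improved = y, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5               # unsuccessful poll: shrink the pattern
    return x

f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
xmin = compass_search(f, [0.0, 0.0])  # converges near the minimizer (1, -2)
```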
Are bilinear quadrilaterals better than linear triangles?
D`Azevedo, E.F.
1993-08-01
This paper compares the theoretical effectiveness of bilinear approximation over quadrilaterals with linear approximation over triangles. Anisotropic mesh transformation is used to generate asymptotically optimally efficient meshes for piecewise linear interpolation over triangles and bilinear interpolation over quadrilaterals. The theory and numerical results suggest that triangles may have a slight advantage over quadrilaterals for interpolating a convex data function, but that bilinear approximation may offer a higher-order approximation for saddle-shaped functions on a well-designed mesh. This work is a basic study on optimal meshes with the intention of gaining insight into the more complex meshing problems in finite element analysis.
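The contrast the paper studies can be seen in a small hedged example: on the unit square, the bilinear interpolant reproduces the saddle function f(x, y) = xy exactly from its corner values, while a linear interpolant on a triangle cannot:

```python
# Illustration (not the paper's meshes): bilinear interpolation on the unit
# square versus linear interpolation on the triangle (0,0), (1,0), (0,1),
# applied to the saddle-shaped data function f(x, y) = x*y.

def bilinear(f00, f10, f01, f11, x, y):
    """Bilinear interpolant on the unit square from four corner values."""
    return (f00 * (1 - x) * (1 - y) + f10 * x * (1 - y)
            + f01 * (1 - x) * y + f11 * x * y)

def linear_triangle(f00, f10, f01, x, y):
    """Linear interpolant on the triangle (0,0), (1,0), (0,1)."""
    return f00 + (f10 - f00) * x + (f01 - f00) * y

f = lambda x, y: x * y                     # saddle-shaped data function
f00, f10, f01, f11 = f(0, 0), f(1, 0), f(0, 1), f(1, 1)

exact = f(1/3, 1/3)                                  # 1/9 at the centroid
bi = bilinear(f00, f10, f01, f11, 1/3, 1/3)          # reproduces 1/9 exactly
tri = linear_triangle(f00, f10, f01, 1/3, 1/3)       # gives 0: misses the saddle
```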
A program for identification of linear systems
NASA Technical Reports Server (NTRS)
Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.
1971-01-01
A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates administration of drugs following some prescribed drug regime. Parameters are identified by a least-square fit of the linear differential system to a set of experimental observations. The method is especially suited when the interval of observation of the system is very long.
Representation of linear orders.
Taylor, D A; Kim, J O; Sudevan, P
1984-01-01
Two binary classification tasks were used to explore the associative structure of linear orders. In Experiment 1, college students classified English letters as targets or nontargets, the targets being consecutive letters of the alphabet. The time to reject nontargets was a decreasing function of the distance from the target set, suggesting response interference mediated by automatic associations from the target to the nontarget letters. The way in which this interference effect depended on the placement of the boundaries between the target and nontarget sets revealed the relative strengths of individual interletter associations. In Experiment 2, students were assigned novel linear orders composed of letterlike symbols and asked to classify pairs of symbols as being adjacent or nonadjacent in the assigned sequence. Reaction time was found to be a joint function of the distance between any pair of symbols and the relative positions of those symbols within the sequence. The effects of both distance and position decreased systematically over 6 days of practice with a particular order, beginning at a level typical of unfamiliar orders and converging on a level characteristic of familiar orders such as letters and digits. These results provide an empirical unification of two previously disparate sets of findings in the literature on linear orders, those concerning familiar and unfamiliar orders, and the systematic transition between the two patterns of results suggests the gradual integration of a new associative structure.
NASA Astrophysics Data System (ADS)
Uhlmann, Armin
2016-03-01
This is an introduction to antilinear operators. Following Wigner, the term 'antilinear' is used, as is standard in physics; mathematicians prefer to say 'conjugate linear'. By restricting to finite-dimensional complex-linear spaces, the exposition becomes elementary in the functional-analytic sense. Nevertheless it shows the striking differences from the linear case. The basics of antilinearity are explained in sects. 2, 3, 4, 7 and in sect. 1.2: spectrum, canonical Hermitian form, antilinear rank one and two operators, the Hermitian adjoint, classification of antilinear normal operators, (skew) conjugations, involutions, and acq-lines, the antilinear counterparts of 1-parameter operator groups. Applications include the representation of the Lagrangian Grassmannian by conjugations and its covering by acq-lines, as well as results on equivalence relations. After recalling elementary Tomita-Takesaki theory, antilinear maps associated to a vector of a two-partite quantum system are defined. By allowing modular objects to be written as twisted products of pairs of such maps, they open some new ways to express EPR and teleportation tasks. The appendix presents a look at the rich structure of antilinear operator spaces.
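In finite dimensions the defining property is easy to verify numerically. The sketch below, an illustration not taken from the paper, represents an antilinear operator as T(v) = A·conj(v) for a complex matrix A and checks that scalars come out conjugated:

```python
# An antilinear operator on C^n can be written as T(v) = A @ conj(v).
# The defining property, checked below, is
#   T(a*u + b*v) = conj(a)*T(u) + conj(b)*T(v),
# in contrast to a linear operator, which would keep a and b unconjugated.

def antilinear_apply(A, v):
    """Apply the antilinear operator determined by A: T(v) = A conj(v)."""
    return [sum(A[i][j] * v[j].conjugate() for j in range(len(v)))
            for i in range(len(A))]

A = [[1 + 1j, 0], [2j, 3]]          # an arbitrary complex matrix
u, v = [1 + 2j, -1j], [0.5 + 0j, 1 - 1j]
a, b = 2 - 1j, 1j

lhs = antilinear_apply(A, [a * ui + b * vi for ui, vi in zip(u, v)])
rhs = [a.conjugate() * x + b.conjugate() * y
       for x, y in zip(antilinear_apply(A, u), antilinear_apply(A, v))]
# lhs equals rhs up to rounding: T is antilinear, not linear
```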
Rees, J.R.
1989-10-01
In April 1989, the first Z zero particle was observed at the Stanford Linear Collider (SLC). The SLC collides high-energy beams of electrons and positrons into each other. In a break with tradition, the SLC aims two linear beams at each other. Strong motives impelled the Stanford team to choose the route of innovation. One reason is that linear colliders promise to be less expensive to build and operate than storage-ring colliders. An equally powerful motive was the desire to build a Z zero factory, a facility at which the Z zero particle can be studied in detail. More than 200 Z zero particles have been detected at the SLC, and more continue to be churned out regularly. It is in measuring the properties of the Z zero that the SLC has a seminal contribution to make. One of the primary goals of the SLC experimental program is to determine the mass of the Z zero as precisely as possible. In the end, the SLC's greatest significance will be in having proved a new accelerator technology. 7 figs.