Sample records for linear complementarity problem

  1. An improved error bound for linear complementarity problems for B-matrices.

    PubMed

    Gao, Lei; Li, Chaoqian

    2017-01-01

    A new error bound for the linear complementarity problem when the matrix involved is a B-matrix is presented, which improves the corresponding result in (Li et al. in Electron. J. Linear Algebra 31(1):476-484, 2016). In addition, some sufficient conditions under which the new bound is sharper than that in (García-Esnaola and Peña in Appl. Math. Lett. 22(7):1071-1075, 2009) are provided.
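
    The B-matrix property behind these bounds is straightforward to check directly. A hedged sketch (assuming the standard definition: each row has a positive sum and every off-diagonal entry lies strictly below the row mean; `is_b_matrix` is an illustrative name, not from the cited papers):

```python
def is_b_matrix(a):
    """Check the B-matrix property: for each row, the row mean is
    positive and strictly exceeds every off-diagonal entry."""
    n = len(a)
    for i in range(n):
        row_mean = sum(a[i]) / n
        if row_mean <= 0:
            return False
        if any(a[i][k] >= row_mean for k in range(n) if k != i):
            return False
    return True

print(is_b_matrix([[2.0, 0.5], [0.0, 1.5]]))   # True: both row means dominate
print(is_b_matrix([[1.0, 2.0], [0.0, 1.0]]))   # False: a[0][1] exceeds row 0's mean
```

    Error bounds of the kind studied in these papers apply exactly when a test of this sort passes.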

  2. Genetic programming over context-free languages with linear constraints for the knapsack problem: first results.

    PubMed

    Bruhn, Peter; Geyer-Schulz, Andreas

    2002-01-01

    In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.

  3. Convergence analysis of modulus-based matrix splitting iterative methods for implicit complementarity problems.

    PubMed

    Wang, An; Cao, Yang; Shi, Quan

    2018-01-01

    In this paper, we demonstrate a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an [Formula: see text]-matrix, respectively.
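
    The modulus-based family can be illustrated by its simplest member, sketched here for the standard (explicit) LCP with identity parameter matrix; the implicit-complementarity variants analyzed by Hong and Li differ in detail. Substituting z = |x| + x and w = |x| - x turns "z >= 0, w = Mz + q >= 0, z^T w = 0" into the fixed-point equation (I + M)x = (I - M)|x| - q:

```python
def solve_2x2(a, b):
    # Cramer's rule for a 2x2 linear system a x = b (example-sized helper).
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

def modulus_lcp(m, q, iters=200):
    """Basic modulus iteration: x <- (I + M)^{-1}((I - M)|x| - q),
    then recover z = |x| + x and w = |x| - x.  Componentwise z*w = 0
    holds automatically, since (|x|+x)(|x|-x) = 0 entry by entry."""
    n = len(q)
    i_plus_m = [[m[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
                for i in range(n)]
    x = [0.0] * n
    for _ in range(iters):
        ax = [abs(v) for v in x]
        rhs = [ax[i] - sum(m[i][j] * ax[j] for j in range(n)) - q[i]
               for i in range(n)]
        x = solve_2x2(i_plus_m, rhs)
    z = [abs(v) + v for v in x]
    w = [sum(m[i][j] * z[j] for j in range(n)) + q[i] for i in range(n)]
    return z, w

z, w = modulus_lcp([[4.0, 1.0], [1.0, 4.0]], [-1.0, 1.0])
# converges to z = [0.25, 0], w = [0, 1.25] for this diagonally dominant M
```

    The splitting variants studied in papers like this one replace (I + M) by a cheaper-to-invert part of M; the fixed-point structure is the same.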

  4. The fully actuated traffic control problem solved by global optimization and complementarity

    NASA Astrophysics Data System (ADS)

    Ribeiro, Isabel M.; de Lurdes de Oliveira Simões, Maria

    2016-02-01

    Global optimization and complementarity are used to determine the signal timing for fully actuated traffic control, regarding effective green and red times on each cycle. The average values of these parameters can be used to estimate the control delay of vehicles. In this article, a two-phase queuing system for a signalized intersection is outlined, based on the principle of minimization of the total waiting time for the vehicles. The underlying model results in a linear program with linear complementarity constraints, solved by a sequential complementarity algorithm. Departure rates of vehicles during green and yellow periods were treated as deterministic, while arrival rates of vehicles were assumed to follow a Poisson distribution. Several traffic scenarios were created and solved. The numerical results reveal that it is possible to use global optimization and complementarity over a reasonable number of cycles and to efficiently determine effective green and red times for a signalized intersection.

  5. Linear complementarity formulation for 3D frictional sliding problems

    USGS Publications Warehouse

    Kaven, Joern; Hickman, Stephen H.; Davatzes, Nicholas C.; Mutlu, Ovunc

    2012-01-01

    Frictional sliding on quasi-statically deforming faults and fractures can be modeled efficiently using a linear complementarity formulation. We review the formulation in two dimensions and expand the formulation to three-dimensional problems including problems of orthotropic friction. This formulation accurately reproduces analytical solutions to static Coulomb friction sliding problems. The formulation accounts for opening displacements that can occur near regions of non-planarity even under large confining pressures. Such problems are difficult to solve owing to the coupling of relative displacements and tractions; thus, many geomechanical problems tend to neglect these effects. Simple test cases highlight the importance of including friction and allowing for opening when solving quasi-static fault mechanics models. These results also underscore the importance of considering the effects of non-planarity in modeling processes associated with crustal faulting.

  6. New Existence Conditions for Order Complementarity Problems

    NASA Astrophysics Data System (ADS)

    Németh, S. Z.

    2009-09-01

    Complementarity problems are mathematical models of problems in economics, engineering and physics. A special class of complementarity problems are the order complementarity problems [2]. Order complementarity problems can be applied in lubrication theory [6] and economics [1]. The notion of exceptional family of elements for general order complementarity problems in Banach spaces will be introduced. It will be shown that for general order complementarity problems defined by completely continuous fields the problem has either a solution or an exceptional family of elements (for other notions of exceptional family of elements see [1, 2, 3, 4] and the related references therein). This solves a conjecture of [2] about the existence of exceptional family of elements for order complementarity problems. The proof can be done by using the Leray-Schauder alternative [5]. An application to integral operators will be given.

  7. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo-Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  8. A linear complementarity method for the solution of vertical vehicle-track interaction

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Gao, Qiang; Wu, Feng; Zhong, Wan-Xie

    2018-02-01

    A new method is proposed for the solution of the vertical vehicle-track interaction including a separation between wheel and rail. The vehicle is modelled as a multi-body system using rigid bodies, and the track is treated as a three-layer beam model in which the rail is considered as an Euler-Bernoulli beam and both the sleepers and the ballast are represented by lumped masses. A linear complementarity formulation is directly established using a combination of the wheel-rail normal contact condition and the generalised-α method. This linear complementarity problem is solved using the Lemke algorithm, and the wheel-rail contact force can be obtained. Then the dynamic responses of the vehicle and the track are solved without iteration based on the generalised-α method. The same equations of motion for the vehicle and track are adopted in the different wheel-rail contact situations. This method removes several restrictions of existing approaches, namely time-dependent mass, damping and stiffness matrices of the coupled system, multiple sets of equations of motion for the different contact situations, and the effect of the contact stiffness. Numerical results demonstrate that the proposed method is effective for simulating the vehicle-track interaction including a separation between wheel and rail.

  9. Algebraic multigrid preconditioners for two-phase flow in porous media with phase transitions

    NASA Astrophysics Data System (ADS)

    Bui, Quan M.; Wang, Lu; Osei-Kuffuor, Daniel

    2018-04-01

    Multiphase flow is a critical process in a wide range of applications, including oil and gas recovery, carbon sequestration, and contaminant remediation. Numerical simulation of multiphase flow requires solving a large, sparse linear system resulting from the discretization of the partial differential equations modeling the flow. In the case of multiphase, multicomponent flow with miscibility effects, this is a very challenging task. The problem becomes even more difficult if phase transitions are taken into account. A new approach to handle phase transitions is to formulate the system as a nonlinear complementarity problem (NCP). Unlike in the primary variable switching technique, the set of primary variables in this approach is fixed even when there is a phase transition. Not only does this improve the robustness of the nonlinear solver, it opens up the possibility to use multigrid methods to solve the resulting linear system. The disadvantage of the complementarity approach, however, is that when a phase disappears, the linear system has the structure of a saddle point problem and becomes indefinite, and current algebraic multigrid (AMG) algorithms cannot be applied directly. In this study, we explore the effectiveness of a new multilevel strategy, based on the multigrid reduction technique, to deal with problems of this type. We demonstrate the effectiveness of the method through numerical results for the case of two-phase, two-component flow with phase appearance/disappearance. We also show that the strategy is efficient and scales optimally with problem size.

  12. Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices

    NASA Astrophysics Data System (ADS)

    Garcia Bertrand, Raquel

    In this dissertation we propose an equilibrium procedure that coordinates the point of view of every market agent resulting in an equilibrium that simultaneously maximizes the independent objective of every market agent and satisfies network constraints. Therefore, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity and line losses so as to achieve maximum social welfare. Then, we approach this equilibrium problem using complementarity theory in order to have the capability of imposing constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution to all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added as additional constraints to the equivalent quadratic programming problem of the mixed linear complementarity problem previously described. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case considering only one time period. Afterwards, we consider an equilibrium or near-equilibrium applied to a multi-period framework. 
This model embodies binary decisions, i.e., on/off status for the units, and therefore optimality conditions cannot be directly applied. To avoid limitations provoked by binary variables, while retaining the advantages of using optimality conditions, we define the multi-period market equilibrium using Benders decomposition, which allows computing binary variables through the master problem and continuous variables through the subproblem. Finally, we illustrate these market equilibrium concepts through several case studies.

  13. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. 
Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
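
    The semi-smooth Newton step described above is commonly built on the Fischer-Burmeister NCP function, phi(a, b) = sqrt(a^2 + b^2) - a - b, which vanishes exactly when a >= 0, b >= 0 and ab = 0. A toy sketch on a 2-variable LCP (not the author's contact solver; a finite-difference Jacobian stands in for the generalized Jacobian, which is adequate away from the kink at the origin):

```python
from math import hypot

def fb(a, b):
    # Fischer-Burmeister NCP function: zero iff a >= 0, b >= 0, a*b = 0.
    return hypot(a, b) - a - b

def residual(z, m, q):
    n = len(q)
    w = [sum(m[i][j] * z[j] for j in range(n)) + q[i] for i in range(n)]
    return [fb(z[i], w[i]) for i in range(n)]

def newton_lcp(m, q, iters=30, h=1e-7):
    """Newton's method on the FB reformulation of a 2-variable LCP,
    with a forward-difference Jacobian and full steps."""
    z = [1.0, 1.0]
    for _ in range(iters):
        f = residual(z, m, q)
        jac = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            zp = list(z)
            zp[j] += h
            fp = residual(zp, m, q)
            for i in range(2):
                jac[i][j] = (fp[i] - f[i]) / h
        # Cramer's rule for the Newton system jac d = -f
        det = jac[0][0] * jac[1][1] - jac[0][1] * jac[1][0]
        d0 = (-f[0] * jac[1][1] + f[1] * jac[0][1]) / det
        d1 = (-jac[0][0] * f[1] + jac[1][0] * f[0]) / det
        z = [z[0] + d0, z[1] + d1]
    return z

z = newton_lcp([[4.0, 1.0], [1.0, 4.0]], [-1.0, 1.0])
# converges to the LCP solution z = [0.25, 0]
```

    A production semi-smooth Newton method would use an element of the generalized Jacobian and a globalization strategy, as the primal-dual active set method in the text effectively does.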

  14. The Solution of Linear Complementarity Problems on an Array Processor.

    DTIC Science & Technology

    1981-01-01


  15. Multigrid Algorithms for the Solution of Linear Complementarity Problems Arising from Free Boundary Problems.

    DTIC Science & Technology

    1980-10-01

    faster than previous algorithms. Indeed, with only minor modifications, the standard multigrid programs solve the LCP with essentially the same efficiency.
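
    Multigrid LCP solvers like the one in this record use projected relaxation sweeps as the smoother; on a single grid this reduces to projected Gauss-Seidel. A minimal sketch (illustrative only; convergence holds, for example, for symmetric positive-definite M):

```python
def pgs_lcp(m, q, sweeps=100):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and z^T w = 0."""
    n = len(q)
    z = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            # residual with z[i] removed, then clamped Gauss-Seidel update
            r = q[i] + sum(m[i][j] * z[j] for j in range(n) if j != i)
            z[i] = max(0.0, -r / m[i][i])
    return z

z = pgs_lcp([[4.0, 1.0], [1.0, 4.0]], [-1.0, 1.0])
# converges to z = [0.25, 0]
```

    A multigrid scheme accelerates sweeps of exactly this kind with coarse-grid corrections.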

  16. Calibration of Lévy Processes with American Options

    NASA Astrophysics Data System (ADS)

    Achdou, Yves

    We study options on financial assets whose discounted prices are exponentials of Lévy processes. The price of an American vanilla option as a function of the maturity and the strike satisfies a linear complementarity problem involving a non-local partial integro-differential operator. It leads to a variational inequality in a suitable weighted Sobolev space. Calibrating the Lévy process may be done by solving an inverse least square problem where the state variable satisfies the previously mentioned variational inequality. We first assume that the volatility is positive: after carefully studying the direct problem, we propose necessary optimality conditions for the least square inverse problem. We also consider the direct problem when the volatility is zero.
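
    The early-exercise feature that makes American pricing a linear complementarity problem can be seen in a much simpler discretization: in a binomial tree, taking the maximum of continuation and intrinsic value at each node enforces the same complementarity condition. A textbook Cox-Ross-Rubinstein sketch, unrelated to the paper's calibration machinery:

```python
from math import exp, sqrt

def american_put_binomial(s0, k, r, sigma, t, n=200):
    """Cox-Ross-Rubinstein binomial price of an American put; the
    max(continuation, intrinsic) update plays the role of the LCP's
    complementarity condition at each node."""
    dt = t / n
    u = exp(sigma * sqrt(dt))
    d = 1.0 / u
    p = (exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    disc = exp(-r * dt)
    # option values at maturity
    v = [max(k - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for step in range(n - 1, -1, -1):
        for j in range(step + 1):
            cont = disc * (p * v[j + 1] + (1.0 - p) * v[j])
            intrinsic = k - s0 * u**j * d**(step - j)
            v[j] = max(cont, intrinsic)    # hold or exercise
    return v[0]

price = american_put_binomial(100.0, 100.0, 0.05, 0.2, 1.0)
```

    In the PDE setting this max() becomes the constraint "value >= payoff", complementary to the inequality on the partial integro-differential operator.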

  17. Nonnegative constraint quadratic program technique to enhance the resolution of γ spectra

    NASA Astrophysics Data System (ADS)

    Li, Jinglun; Xiao, Wuyun; Ai, Xianyun; Chen, Ye

    2018-04-01

    Two concepts, the nonnegative least squares problem (NNLS) and the linear complementarity problem (LCP), are introduced for resolution enhancement of γ spectra. The respective algorithms, the active set method and the primal-dual interior point method, are applied to solve these two problems. Mathematically, the nonnegative constraint induces sparsity in the optimal solution of the deconvolution, and it is this sparsity that enhances the resolution. Finally, the two methods are compared with the boosted L_R and Gold methods in terms of peak-position accuracy and computation time.
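
    The NNLS problem named here, min ||Ax - b||^2 subject to x >= 0, can be sketched with a toy projected-gradient loop (the abstract's active set and primal-dual interior point methods are the serious alternatives; this is only an illustration):

```python
def nnls_pg(a, b, iters=5000):
    """Projected gradient descent for min ||A x - b||^2, x >= 0.
    The step 1/||A||_F^2 is a crude but sufficient step-size bound
    for these small examples."""
    m, n = len(a), len(a[0])
    step = 1.0 / sum(a[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(a[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2.0 * sum(a[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - step * g[j]) for j in range(n)]
    return x

# the unconstrained least-squares solution here is [-1, 2]; the
# nonnegativity constraint clamps the first component and re-fits the second
x = nnls_pg([[1.0, 1.0], [1.0, 2.0]], [1.0, 3.0])
# converges to approximately [0, 1.4]
```

    The clamping in the update is where the sparsity mentioned in the abstract comes from: components pinned at zero drop out of the fit.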

  18. H∞ control for uncertain linear system over networks with Bernoulli data dropout and actuator saturation.

    PubMed

    Yu, Jimin; Yang, Chenchen; Tang, Xiaoming; Wang, Ping

    2018-03-01

    This paper investigates the H∞ control problem for uncertain linear systems over networks with random communication data dropout and actuator saturation. The random data dropout process is modeled by a Bernoulli distributed white sequence with a known conditional probability distribution, and the actuator saturation is confined in a convex hull by introducing a group of auxiliary matrices. By constructing a quadratic Lyapunov function, effective conditions for the state-feedback-based H∞ controller and the observer-based H∞ controller are proposed in the form of non-convex matrix inequalities, taking the random data dropout and actuator saturation into consideration simultaneously; the resulting non-convex feasibility problem is solved by applying the cone complementarity linearization (CCL) procedure. Finally, two simulation examples are given to demonstrate the effectiveness of the proposed design techniques.

  19. Private algebras in quantum information and infinite-dimensional complementarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crann, Jason, E-mail: jason-crann@carleton.ca; Kribs, David W., E-mail: dkribs@uoguelph.ca

    We introduce a generalized framework for private quantum codes using von Neumann algebras and the structure of commutants. This leads naturally to a more general notion of complementary channel, which we use to establish a generalized complementarity theorem between private and correctable subalgebras that applies to both the finite and infinite-dimensional settings. Linear bosonic channels are considered and specific examples of Gaussian quantum channels are given to illustrate the new framework together with the complementarity theorem.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, William E.; Siirola, John Daniel

    We describe new capabilities for modeling MPEC problems within the Pyomo modeling software. These capabilities include new modeling components that represent complementarity conditions, modeling transformations for re-expressing models with complementarity conditions in other forms, and meta-solvers that apply transformations and numeric optimization solvers to optimize MPEC problems. We illustrate the breadth of Pyomo's modeling capabilities for MPEC problems, and we describe how Pyomo's meta-solvers can perform local and global optimization of MPEC problems.

  1. Interpersonal complementarity in the mental health intake: a mixed-methods study.

    PubMed

    Rosen, Daniel C; Miller, Alisa B; Nakash, Ora; Halperin, Lucila; Alegría, Margarita

    2012-04-01

    The study examined which socio-demographic differences between clients and providers influenced interpersonal complementarity during an initial intake session; that is, behaviors that facilitate harmonious interactions between client and provider. Complementarity was assessed using blinded ratings of 114 videotaped intake sessions by trained observers. Hierarchical linear models were used to examine how match between client and provider in race/ethnicity, sex, and age were associated with levels of complementarity. A qualitative analysis investigated potential mechanisms that accounted for overall complementarity beyond match by examining client-provider dyads in the top and bottom quartiles of the complementarity measure. Results indicated significant interactions between client's race/ethnicity (Black) and provider's race/ethnicity (Latino) (p = .036) and client's age and provider's age (p = .044) on the Affiliation axis. The qualitative investigation revealed that client-provider interactions in the upper quartile of complementarity were characterized by consistent descriptions between the client and provider of concerns and expectations as well as depictions of what was important during the meeting. Results suggest that differences in social identities, although important, may be overcome by interpersonal variables early in the therapeutic relationship. Implications for both clinical practice and future research are discussed, as are factors relevant to working across cultures.

  3. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    NASA Astrophysics Data System (ADS)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.

  4. Complementarity and Compensation: Bridging the Gap between Writing and Design.

    ERIC Educational Resources Information Center

    Killingsworth, M. Jimmie; Sanders, Scott P.

    1990-01-01

    Outlines two rhetorical principles for producing iconic-mosaic texts--the principle of complementarity and the principle of compensation. Shows how these principles can be applied to practical problems in coordinating the writing and design processes in student projects. (RS)

  5. Modeling and simulation of dynamics of a planar-motion rigid body with friction and surface contact

    NASA Astrophysics Data System (ADS)

    Wang, Xiaojun; Lv, Jing

    2017-07-01

    The modeling and numerical method for the dynamics of a planar-motion rigid body with frictional contact between plane surfaces were presented based on the theory of contact mechanics and the algorithm of the linear complementarity problem (LCP). Coulomb's dry friction model is adopted as the friction law, and the normal contact forces are expressed as functions of the local deformations and their speeds in the contact bodies. The dynamic equations of the rigid body are obtained by the Lagrange equation. The transition problem of stick-slip motions between contact surfaces is formulated and solved as an LCP through establishing the complementary conditions of the friction law. Finally, a numerical example is presented to show the application.

  6. An approach of traffic signal control based on NLRSQP algorithm

    NASA Astrophysics Data System (ADS)

    Zou, Yuan-Yang; Hu, Yu

    2017-11-01

    This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective function of the model is to minimize the total queue length, with weight factors, at the end of each cycle. Then, a combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are presented to study how the initial solution of the algorithm should be set so that a better local optimal solution is obtained more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution is, the better the local optimal solution obtained.

  7. Complementarity effects on tree growth are contingent on tree size and climatic conditions across Europe

    PubMed Central

    Madrigal-González, Jaime; Ruiz-Benito, Paloma; Ratcliffe, Sophia; Calatayud, Joaquín; Kändler, Gerald; Lehtonen, Aleksi; Dahlgren, Jonas; Wirth, Christian; Zavala, Miguel A.

    2016-01-01

    Neglecting tree size and stand structure dynamics might bias the interpretation of the diversity-productivity relationship in forests. Here we show evidence that complementarity is contingent on tree size across large-scale climatic gradients in Europe. We compiled growth data of the 14 most dominant tree species in 32,628 permanent plots covering boreal, temperate and Mediterranean forest biomes. Niche complementarity is expected to result in significant growth increments of trees surrounded by a larger proportion of functionally dissimilar neighbours. Functional dissimilarity at the tree level was assessed using four functional types: i.e. broad-leaved deciduous, broad-leaved evergreen, needle-leaved deciduous and needle-leaved evergreen. Using linear mixed models, we show that complementarity effects depend on tree size along an energy availability gradient across Europe. Specifically: (i) complementarity effects at low and intermediate positions of the gradient (coldest-temperate areas) were stronger for small than for large trees; (ii) in contrast, at the upper end of the gradient (warmer regions), complementarity is more widespread in larger than smaller trees, which in turn showed negative growth responses to increased functional dissimilarity. Our findings suggest that the outcome of species mixing on stand productivity might critically depend on individual size distribution structure along gradients of environmental variation. PMID:27571971

  8. Can hydro-economic river basin models simulate water shadow prices under asymmetric access?

    PubMed

    Kuhn, A; Britz, W

    2012-01-01

    Hydro-economic river basin models (HERBM) based on mathematical programming are conventionally formulated as explicit 'aggregate optimization' problems with a single, aggregate objective function. Often unintended, this format implicitly assumes that decisions on water allocation are made via central planning or functioning markets such as to maximize social welfare. In the absence of perfect water markets, however, individually optimal decisions by water users will differ from the social optimum. Classical aggregate HERBMs cannot simulate that situation and thus might be unable to describe existing institutions governing access to water and might produce biased results for alternative ones. We propose a new solution format for HERBMs, based on the format of the mixed complementarity problem (MCP), where modified shadow price relations express spatial externalities resulting from asymmetric access to water use. This new problem format, as opposed to commonly used linear (LP) or non-linear programming (NLP) approaches, enables the simultaneous simulation of numerous 'independent optimization' decisions by multiple water users while maintaining physical interdependences based on water use and flow in the river basin. We show that the alternative problem format allows the formulation of HERBMs that yield more realistic results when comparing different water management institutions.

  9. A Portfolio for Optimal Collaboration of Human and Cyber Physical Production Systems in Problem-Solving

    ERIC Educational Resources Information Center

    Ansari, Fazel; Seidenberg, Ulrich

    2016-01-01

    This paper discusses the complementarity of human and cyber physical production systems (CPPS). The discourse of complementarity is elaborated by defining five criteria for comparing the characteristics of human and CPPS. Finally, a management portfolio matrix is proposed for examining the feasibility of optimal collaboration between them. The…

  10. LCP method for a planar passive dynamic walker based on an event-driven scheme

    NASA Astrophysics Data System (ADS)

    Zheng, Xu-Dong; Wang, Qi

    2018-06-01

    The main purpose of this paper is to present a linear complementarity problem (LCP) method for a planar passive dynamic walker with round feet based on an event-driven scheme. The passive dynamic walker is treated as a planar multi-rigid-body system. The dynamic equations of the passive dynamic walker are obtained by using Lagrange's equations of the second kind. The normal forces and frictional forces acting on the feet of the passive walker are described based on a modified Hertz contact model and Coulomb's law of dry friction. The state transition problem of stick-slip between feet and floor is formulated as an LCP, which is solved with an event-driven scheme. Finally, to validate the methodology, four gaits of the walker are simulated: the stance leg neither slips nor bounces; the stance leg slips without bouncing; the stance leg bounces without slipping; the walker stands after walking several steps.
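The event-driven scheme the abstract describes alternates smooth integration with event detection at state transitions. A minimal sketch of the idea (a falling point mass with a touchdown event, a stand-in for the walker's far richer stick-slip dynamics) using scipy's event support:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Event-driven sketch: integrate the smooth dynamics until a contact event
# (here: a point mass dropped from height 1 m reaching the floor), then the
# scheme would switch to the post-event dynamics. Toy example, not the walker.

def free_flight(t, y):          # y = [height, velocity]
    return [y[1], -9.81]

def touchdown(t, y):            # event function: height crosses zero
    return y[0]
touchdown.terminal = True       # stop integration at the event
touchdown.direction = -1        # only trigger on downward crossing

sol = solve_ivp(free_flight, (0.0, 5.0), [1.0, 0.0],
                events=touchdown, rtol=1e-9, atol=1e-9)
t_impact = sol.t_events[0][0]   # analytic answer: sqrt(2/9.81) ~ 0.4515 s
```

At each detected event the state transition (stick-slip, impact) would be resolved, in the paper's method by solving an LCP, before integration resumes.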

  11. LCP method for a planar passive dynamic walker based on an event-driven scheme

    NASA Astrophysics Data System (ADS)

    Zheng, Xu-Dong; Wang, Qi

    2018-02-01

    The main purpose of this paper is to present a linear complementarity problem (LCP) method for a planar passive dynamic walker with round feet based on an event-driven scheme. The passive dynamic walker is treated as a planar multi-rigid-body system. The dynamic equations of the passive dynamic walker are obtained by using Lagrange's equations of the second kind. The normal forces and frictional forces acting on the feet of the passive walker are described based on a modified Hertz contact model and Coulomb's law of dry friction. The state transition problem of stick-slip between feet and floor is formulated as an LCP, which is solved with an event-driven scheme. Finally, to validate the methodology, four gaits of the walker are simulated: the stance leg neither slips nor bounces; the stance leg slips without bouncing; the stance leg bounces without slipping; the walker stands after walking several steps.

  12. Biological competition: Decision rules, pattern formation, and oscillations

    PubMed Central

    Grossberg, Stephen

    1980-01-01

    Competition solves a universal problem about pattern processing by cellular systems. Competition allows cells to automatically retune their sensitivity to avoid noise and saturation effects. All competitive systems induce decision schemes that permit them to be classified. Systems are identified that achieve global pattern formation, or decision-making, no matter how their parameters are chosen. Oscillations can occur due to contradictions in a system's decision scheme. The pattern formation and oscillation results are extreme examples of a complementarity principle that seems to hold for competitive systems. Nonlinear competitive systems can sometimes appear, to a macroscopic observer, to have linear and cooperative properties, although the two types of systems are not equivalent. This observation is relevant to theories about the evolutionary transition from competitive to cooperative behavior. PMID:16592807

  13. Nonlocality versus complementarity: a conservative approach to the information problem

    NASA Astrophysics Data System (ADS)

    Giddings, Steven B.

    2011-01-01

    A proposal for resolution of the information paradox is that 'nice slice' states, which have been viewed as providing a sharp argument for information loss, do not in fact do so as they do not give a fully accurate description of the quantum state of a black hole. This however leaves an information problem, which is to provide a consistent description of how information escapes when a black hole evaporates. While a rather extreme form of nonlocality has been advocated in the form of complementarity, this paper argues that it is not necessary, and that more modest nonlocality could solve the information problem. One possible distinguishing characteristic of scenarios is the information retention time. The question of whether such nonlocality implies acausality, and particularly inconsistency, is briefly addressed. The need for such nonlocality, and its apparent tension with our empirical observations of local quantum field theory, may be a critical missing piece in understanding the principles of quantum gravity.

  14. A reduced-order model from high-dimensional frictional hysteresis

    PubMed Central

    Biswas, Saurabh; Chatterjee, Anindya

    2014-01-01

    Hysteresis in material behaviour includes both signum nonlinearities as well as high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522
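The basis-selection step the abstract mentions, choosing reduced-order basis vectors via the singular value decomposition of solution snapshots, can be sketched as follows (synthetic snapshot data, not the paper's frictional system):

```python
import numpy as np

# Snapshot-based model reduction via SVD: collect solution snapshots as
# columns, keep the leading left singular vectors as a reduced basis,
# and project. The snapshots here are synthetic and, by construction,
# live exactly in a 2-D subspace.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
X = (np.outer(np.sin(2 * np.pi * t), rng.standard_normal(50))
     + np.outer(np.cos(2 * np.pi * t), rng.standard_normal(50))).T  # 50 states x 200 snapshots

U, s, _ = np.linalg.svd(X, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))   # numerical rank determines the basis size
V = U[:, :k]                        # reduced basis (50 x k)
X_red = V.T @ X                     # reduced coordinates (k x 200)
err = np.linalg.norm(X - V @ X_red) / np.linalg.norm(X)   # reconstruction error
```

For real hysteresis data the singular values decay gradually rather than cutting off sharply, and k trades accuracy against model order.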

  15. Higher order sensitivity of solutions to convex programming problems without strict complementarity

    NASA Technical Reports Server (NTRS)

    Malanowski, Kazimierz

    1988-01-01

    Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.

  16. A computational study of the use of an optimization-based method for simulating large multibody systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petra, C.; Gavrea, B.; Anitescu, M.

    2009-01-01

    The present work aims at comparing the performance of several quadratic programming (QP) solvers for simulating large-scale frictional rigid-body systems. Traditional time-stepping schemes for simulation of multibody systems are formulated as linear complementarity problems (LCPs) with copositive matrices. Such LCPs are generally solved by means of Lemke-type algorithms, and solvers such as the PATH solver proved to be robust. However, for large systems, the PATH solver or any other pivotal algorithm becomes impractical from a computational point of view. The convex relaxation proposed by one of the authors allows the formulation of the integration step as a QP, for which a wide variety of state-of-the-art solvers are available. In what follows we report the results obtained solving that subproblem when using the QP solvers MOSEK, OOQP, TRON, and BLMVM. OOQP is presented with both the symmetric indefinite solver MA27 and our Cholesky reformulation using the CHOLMOD package. We investigate computational performance and address the correctness of the results from a modeling point of view. We conclude that the OOQP solver, particularly with the CHOLMOD linear algebra solver, has predictable performance and memory use patterns and is far more competitive for these problems than are the other solvers.

  17. An interactive approach based on a discrete differential evolution algorithm for a class of integer bilevel programming problems

    NASA Astrophysics Data System (ADS)

    Li, Hong; Zhang, Li; Jiao, Yong-Chang

    2016-07-01

    This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-value or continuous decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions in the lower-level programming, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with the complementarity constraints, and then the smoothing technique is applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained, and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem only with inequality constraints is handled by using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.
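One common way to smooth a complementarity condition 0 ≤ a ⟂ b ≥ 0 (the abstract does not specify which smoothing the paper uses, so this is an assumed choice) is the smoothed Fischer-Burmeister function, whose root approaches the exact complementary solution as the smoothing parameter μ tends to zero:

```python
import numpy as np

# Smoothed Fischer-Burmeister function: phi_mu(a, b) = 0 is a smooth
# surrogate for the complementarity condition 0 <= a, 0 <= b, a*b = 0.
# Setting (a + b)^2 = a^2 + b^2 + 2*mu gives a*b = mu, so the exact
# root for fixed b is a = mu / b, which vanishes as mu -> 0.
def fb_smooth(a, b, mu):
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu)

# phi_mu is strictly increasing in a, so a simple bisection finds its root.
def solve_a(b, mu, lo=0.0, hi=10.0, iters=80):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fb_smooth(mid, b, mu) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

roots = [solve_a(b=1.0, mu=mu) for mu in (1e-2, 1e-4, 1e-6)]  # -> ~mu each
```

Driving μ toward zero along the iterations recovers the original complementarity constraint while keeping every subproblem smooth.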

  18. Shape Complementarity of Protein-Protein Complexes at Multiple Resolutions

    PubMed Central

    Zhang, Qing; Sanner, Michel; Olson, Arthur J.

    2010-01-01

    Biological complexes typically exhibit intermolecular interfaces of high shape complementarity. Many computational docking approaches use this surface complementarity as a guide in the search for predicting the structures of protein-protein complexes. Proteins often undergo conformational changes in order to create a highly complementary interface when associating. These conformational changes are a major cause of failure for automated docking procedures when predicting binding modes between proteins using their unbound conformations. Low resolution surfaces in which high frequency geometric details are omitted have been used to address this problem. These smoothed, or blurred, surfaces are expected to minimize the differences between free and bound structures, especially those that are due to side chain conformations or small backbone deviations. In spite of the fact that this approach has been used in many docking protocols, there has yet to be a systematic study of the effects of such surface smoothing on the shape complementarity of the resulting interfaces. Here we investigate this question by computing shape complementarity of a set of 66 protein-protein complexes represented by multi-resolution blurred surfaces. Complexed and unbound structures are available for these protein-protein complexes. They are a subset of complexes from a non-redundant docking benchmark selected for rigidity (i.e. the proteins undergo limited conformational changes between their bound and unbound states). In this work we construct the surfaces by isocontouring a density map obtained by accumulating the densities of Gaussian functions placed at all atom centers of the molecule. The smoothness or resolution is specified by a Gaussian fall-off coefficient, termed “blobbyness”. 
Shape complementarity is quantified using a histogram of the shortest distances between two proteins' surface mesh vertices for both the crystallographic complexes and the complexes built using the protein structures in their unbound conformation. The histograms calculated for the bound complex structures demonstrate that medium resolution smoothing (blobbyness=−0.9) can reproduce about 88% of the shape complementarity of atomic resolution surfaces. Complexes formed from the free component structures show a partial loss of shape complementarity (more overlaps and gaps) with the atomic resolution surfaces. For surfaces smoothed to low resolution (blobbyness=−0.3), we find more consistency of shape complementarity between the complexed and free cases. To further reduce bad contacts without significantly impacting the good contacts we introduce another blurred surface, in which the Gaussian densities of flexible atoms are reduced. From these results we discuss the use of shape complementarity in protein-protein docking. PMID:18837463
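The histogram of shortest distances described above can be sketched with a k-d tree nearest-neighbour query; the point clouds below are random stand-ins for the two proteins' surface mesh vertices, not actual protein data:

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy version of the shortest-distance histogram between two surfaces.
rng = np.random.default_rng(1)
surf_a = rng.uniform(0.0, 1.0, size=(500, 3))    # "protein A" surface vertices
surf_b = rng.uniform(0.9, 1.9, size=(500, 3))    # offset cloud, partial overlap

# For every vertex of A, the distance to the nearest vertex of B
d, _ = cKDTree(surf_b).query(surf_a)
hist, edges = np.histogram(d, bins=20, range=(0.0, 2.0))
```

A sharp peak of the histogram near zero without mass at negative clearance indicates tight, overlap-free complementarity; gaps and clashes spread the distribution out.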

  19. Bohrian Complementarity in the Light of Kantian Teleology

    NASA Astrophysics Data System (ADS)

    Pringe, Hernán

    2014-03-01

    The Kantian influences on Bohr's thought and the relationship between the perspective of complementarity in physics and in biology seem at first sight completely unrelated issues. However, the goal of this work is to show their intimate connection. We shall see that Bohr's views on biology shed light on Kantian elements of his thought, which enables a better understanding of his complementary interpretation of quantum theory. For this purpose, we shall begin by discussing Bohr's views on the analogies concerning the epistemological situation in biology and in physics. Later, we shall compare the Bohrian and the Kantian approaches to the science of life in order to show their close connection. On this basis, we shall finally turn to the issue of complementarity in quantum theory in order to assess what we can learn about the epistemological problems in the quantum realm from a consideration of Kant's views on teleology.

  20. Reciprocal and Complementary Sibling Interactions: Relations with Socialization Outcomes in the Kindergarten Classroom

    PubMed Central

    Harrist, Amanda W.; Achacoso, Joseph A.; John, Aesha; Pettit, Gregory S.; Bates, John E.; Dodge, Kenneth A.

    2013-01-01

    Research Findings: To examine associations between sibling interaction patterns and later social outcomes in single- and two-parent families, 113 kindergarteners took part in naturalistic observations at home with siblings, classmates participated in sociometric interviews, and teachers completed behavior ratings. Sibling interactions were coded using a newly-developed 39-item checklist, and proportions of complementary and reciprocal sibling interactions were computed. Complementarity occurred more among dyads where kindergartners were with toddler or infant siblings than among kindergartners with older or near-age younger siblings. Higher levels of complementarity predicted lower levels of internalizing but were not related to externalizing problems. Kindergartners’ sociometric status in the classroom differed as a function of sibling interaction patterns, with neglected and controversial children experiencing less complementarity/more reciprocity than popular, average, and rejected children. Finally, there was some evidence for differential associations of sibling interaction patterns with social outcomes for children in single- versus two-parent families: regressions testing interaction effects show sibling reciprocity positively associated with kindergartners’ social skills only in single-parent families, and complementary sibling interactions positively related to internalizing problems only in two-parent families. Implications for Practice: Those working with divorcing or other single-parent families might consider sibling interactions as a potential target for social skill building. PMID:26005311

  1. The Role of Shape Complementarity in the Protein-Protein Interactions

    PubMed Central

    Li, Ye; Zhang, Xianren; Cao, Dapeng

    2013-01-01

    We use a dissipative particle dynamic simulation to investigate the effects of shape complementarity on the protein-protein interactions. By monitoring different kinds of protein shape-complementarity modes, we gave a clear mechanism to reveal the role of the shape complementarity in the protein-protein interactions, i.e., when the two proteins with shape complementarity approach each other, the conformation of lipid chains between two proteins would be restricted significantly. The lipid molecules tend to leave the gap formed by two proteins to maximize the configuration entropy, and therefore yield an effective entropy-induced protein-protein attraction, which enhances the protein aggregation. In short, this work provides an insight into understanding the importance of the shape complementarity in the protein-protein interactions especially for protein aggregation and antibody–antigen complexes. Definitely, the shape complementarity is the third key factor affecting protein aggregation and complex, besides the electrostatic-complementarity and hydrophobic complementarity. PMID:24253561

  2. The Role of Shape Complementarity in the Protein-Protein Interactions

    NASA Astrophysics Data System (ADS)

    Li, Ye; Zhang, Xianren; Cao, Dapeng

    2013-11-01

    We use a dissipative particle dynamic simulation to investigate the effects of shape complementarity on the protein-protein interactions. By monitoring different kinds of protein shape-complementarity modes, we gave a clear mechanism to reveal the role of the shape complementarity in the protein-protein interactions, i.e., when the two proteins with shape complementarity approach each other, the conformation of lipid chains between two proteins would be restricted significantly. The lipid molecules tend to leave the gap formed by two proteins to maximize the configuration entropy, and therefore yield an effective entropy-induced protein-protein attraction, which enhances the protein aggregation. In short, this work provides an insight into understanding the importance of the shape complementarity in the protein-protein interactions especially for protein aggregation and antibody-antigen complexes. Definitely, the shape complementarity is the third key factor affecting protein aggregation and complex, besides the electrostatic-complementarity and hydrophobic complementarity.

  3. Spatio-temporal complementarity of wind and solar power in India

    NASA Astrophysics Data System (ADS)

    Lolla, Savita; Baidya Roy, Somnath; Chowdhury, Sourangshu

    2015-04-01

    Wind and solar power are likely to be a part of the solution to the climate change problem. That is why they feature prominently in the energy policies of all industrial economies including India. One of the major hindrances that is preventing an explosive growth of wind and solar energy is the issue of intermittency. This is a major problem because in a rapidly moving economy, energy production must match the patterns of energy demand. Moreover, sudden increase and decrease in energy supply may destabilize the power grids leading to disruptions in power supply. In this work we explore if the patterns of variability in wind and solar energy availability can offset each other so that a constant supply can be guaranteed. As a first step, this work focuses on seasonal-scale variability for each of the 5 regional power transmission grids in India. Communication within each grid is better than communication between grids. Hence, it is assumed that the grids can switch sources relatively easily. Wind and solar resources are estimated using the MERRA Reanalysis data for the 1979-2013 period. Solar resources are calculated with a 20% conversion efficiency. Wind resources are estimated using a 2 MW turbine power curve. Total resources are obtained by optimizing location and number of wind/solar energy farms. Preliminary results show that the southern and western grids are more appropriate for cogeneration than the other grids. Many studies on wind-solar cogeneration have focused on temporal complementarity at local scale. However, this is one of the first studies to explore spatial complementarity over regional scales. This project may help accelerate renewable energy penetration in India by identifying regional grid(s) where the renewable energy intermittency problem can be minimized.
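The notion of complementarity used here, availability patterns that offset each other, can be illustrated with synthetic monthly profiles (an assumed toy example, not MERRA data): perfectly anticorrelated wind and solar series yield a constant combined supply:

```python
import numpy as np

# Toy seasonal complementarity check with synthetic monthly capacity factors.
months = np.arange(12)
solar = 1.0 + 0.5 * np.sin(2 * np.pi * months / 12)   # peaks in "summer"
wind = 1.0 - 0.5 * np.sin(2 * np.pi * months / 12)    # peaks in "winter"

corr = np.corrcoef(solar, wind)[0, 1]                 # -1: perfectly complementary
combined_cv = np.std(solar + wind) / np.mean(solar + wind)  # 0: constant supply
```

With real resource data the correlation is of course not exactly -1; the more negative it is for a given grid region, the more cogeneration can smooth intermittency there.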

  4. Complementarity As Generative Principle: A Thought Pattern for Aesthetic Appreciations and Cognitive Appraisals in General

    PubMed Central

    Bao, Yan; von Stosch, Alexandra; Park, Mona; Pöppel, Ernst

    2017-01-01

    In experimental aesthetics the relationship between the arts and cognitive neuroscience has gained particular interest in recent years. But has cognitive neuroscience indeed something to offer when studying the arts? Here we present a theoretical frame within which the concept of complementarity as a generative or creative principle is proposed; neurocognitive processes are characterized by the duality of complementary activities like bottom-up and top-down control, or logistical functions like temporal control and content functions like perceptions in the neural machinery. On that basis a thought pattern is suggested for aesthetic appreciations and cognitive appraisals in general. This thought pattern is deeply rooted in the history of philosophy and art theory since antiquity; and complementarity also characterizes neural operations as basis for cognitive processes. We then discuss some challenges one is confronted with in experimental aesthetics; in our opinion, one serious problem is the lack of a taxonomy of functions in psychology and neuroscience which is generally accepted. This deficit makes it next to impossible to develop acceptable models which are similar to what has to be modeled. Another problem is the severe language bias in this field of research as knowledge gained in many languages over the ages remains inaccessible to most scientists. Thus, an inspection of research results or theoretical concepts is necessarily too narrow. In spite of these limitations we provide a selective summary of some results and viewpoints with a focus on visual art and its appreciation. It is described how questions of art and aesthetic appreciations using behavioral methods and in particular brain-imaging techniques are analyzed and evaluated focusing on such issues like the representation of artwork or affective experiences. 
Finally, we emphasize complementarity as a generative principle on a practical level when artists and scientists work directly together which can lead to new insights and broader perspectives on both sides. PMID:28536548

  5. Complementarity As Generative Principle: A Thought Pattern for Aesthetic Appreciations and Cognitive Appraisals in General.

    PubMed

    Bao, Yan; von Stosch, Alexandra; Park, Mona; Pöppel, Ernst

    2017-01-01

    In experimental aesthetics the relationship between the arts and cognitive neuroscience has gained particular interest in recent years. But has cognitive neuroscience indeed something to offer when studying the arts? Here we present a theoretical frame within which the concept of complementarity as a generative or creative principle is proposed; neurocognitive processes are characterized by the duality of complementary activities like bottom-up and top-down control, or logistical functions like temporal control and content functions like perceptions in the neural machinery. On that basis a thought pattern is suggested for aesthetic appreciations and cognitive appraisals in general. This thought pattern is deeply rooted in the history of philosophy and art theory since antiquity; and complementarity also characterizes neural operations as basis for cognitive processes. We then discuss some challenges one is confronted with in experimental aesthetics; in our opinion, one serious problem is the lack of a taxonomy of functions in psychology and neuroscience which is generally accepted. This deficit makes it next to impossible to develop acceptable models which are similar to what has to be modeled. Another problem is the severe language bias in this field of research as knowledge gained in many languages over the ages remains inaccessible to most scientists. Thus, an inspection of research results or theoretical concepts is necessarily too narrow. In spite of these limitations we provide a selective summary of some results and viewpoints with a focus on visual art and its appreciation. It is described how questions of art and aesthetic appreciations using behavioral methods and in particular brain-imaging techniques are analyzed and evaluated focusing on such issues like the representation of artwork or affective experiences. 
Finally, we emphasize complementarity as a generative principle on a practical level when artists and scientists work directly together which can lead to new insights and broader perspectives on both sides.

  6. Electrostatic complementarity between proteins and ligands. 1. Charge disposition, dielectric and interface effects

    NASA Astrophysics Data System (ADS)

    Chau, P.-L.; Dean, P. M.

    1994-10-01

    Electrostatic interactions have always been considered an important factor governing ligand-receptor interactions. Previous work in this field has established the existence of electrostatic complementarity between the ligand and its receptor site. However, this property has not been treated rigorously, and the description remains largely qualitative. In this work, 34 data sets of high quality were chosen from the Brookhaven Protein Databank. The electrostatic complementarity has been calculated between the surface potentials; complementarity is absent between adjacent or neighbouring atoms of the ligand and the receptor. There is little difference between complementarities on the total ligand surface and the interfacial region. Altering the homogeneous dielectric to distance-dependent dielectrics reduces the complementarity slightly, but does not affect the pattern of complementarity.

  7. Biofuel supply chain, market, and policy analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Leilei

    Renewable fuel is receiving an increasing attention as a substitute for fossil based energy. The US Department of Energy (DOE) has employed increasing effort on promoting the advanced biofuel productions. Although the advanced biofuel remains at its early stage, it is expected to play an important role in climate policy in the future in the transportation sector. This dissertation studies the emerging biofuel supply chain and markets by analyzing the production cost, and the outcomes of the biofuel market, including blended fuel market price and quantity, biofuel contract price and quantity, profitability of each stakeholder (farmers, biofuel producers, biofuel blenders) in the market. I also address government policy impacts on the emerging biofuel market. The dissertation is composed with three parts, each in a paper format. The first part studies the supply chain of emerging biofuel industry. Two optimization-based models are built to determine the number of facilities to deploy, facility locations, facility capacities, and operational planning within facilities. Cost analyses have been conducted under a variety of biofuel demand scenarios. It is my intention that this model will shed light on biofuel supply chain design considering operational planning under uncertain demand situations. The second part of the dissertation work focuses on analyzing the interaction between the key stakeholders along the supply chain. A bottom-up equilibrium model is built for the emerging biofuel market to study the competition in the advanced biofuel market, explicitly formulating the interactions between farmers, biofuel producers, blenders, and consumers. The model simulates the profit maximization of multiple market entities by incorporating their competitive decisions in farmers' land allocation, biomass transportation, biofuel production, and biofuel blending. 
    As such, the equilibrium model is capable of and appropriate for policy analysis, especially for those policies that have complex ramifications and result in sophisticated interactions among multiple stakeholders. The third part of the dissertation investigates the impacts of flexible fuel vehicles (FFVs) market penetration levels on the market outcomes, including cellulosic biofuel production and price, blended fuel market price, and profitability of each stakeholder in the biofuel supply chain for imperfectly competitive biofuel markets. In this paper, I investigate the penetration levels of FFVs by incorporating the substitution among different fuels in blended fuel demand functions through "cross price elasticity" in a bottom-up equilibrium model framework. The complementarity-based problem is solved by a Taylor expansion-based iterative procedure. At each step of the iteration, the highly nonlinear complementarity problems with constant elasticity of demand functions are linearized into linear complementarity problems and solved until convergence. This model can be applied to investigate the interaction between the stakeholders in the biofuel market, and to assist decision making for both cellulosic biofuel investors and government.
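The linearize-and-iterate strategy described above can be illustrated on a toy one-equation market (an assumed example, not the dissertation's model): repeatedly linearizing a constant-elasticity demand curve around the current iterate is simply Newton's method on the excess-demand equation:

```python
import numpy as np

# Toy market clearing: constant-elasticity demand D(p) = a * p**(-eps)
# against fixed supply S. Each iteration replaces D by its first-order
# Taylor expansion at the current price and solves the linearized equation.
a, eps, S = 100.0, 1.5, 20.0

p = 1.0                                  # initial price guess
for _ in range(50):
    f = a * p ** (-eps) - S              # excess demand at current iterate
    fprime = -eps * a * p ** (-eps - 1.0)
    p -= f / fprime                      # solve the linearized problem

# analytic equilibrium price: p* = (a / S)**(1 / eps)
```

In the full model the linearized subproblem is a linear complementarity problem rather than a scalar equation, but the iterate-until-convergence structure is the same.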

  8. Bioethical pluralism and complementarity.

    PubMed

    Grinnell, Frederick; Bishop, Jeffrey P; McCullough, Laurence B

    2002-01-01

    This essay presents complementarity as a novel feature of bioethical pluralism. First introduced by Niels Bohr in conjunction with quantum physics, complementarity in bioethics occurs when different perspectives account for equally important features of a situation but are mutually exclusive. Unlike conventional approaches to bioethical pluralism, which attempt in one fashion or another to isolate and choose between different perspectives, complementarity accepts all perspectives. As a result, complementarity results in a state of holistic, dynamic tension, rather than one that yields singular or final moral judgments.

  9. The 'hard problem' and the quantum physicists. Part 1: the first generation.

    PubMed

    Smith, C U M

    2006-07-01

    All four of the most important figures in the early twentieth-century development of quantum physics-Niels Bohr, Erwin Schroedinger, Werner Heisenberg and Wolfgang Pauli-had strong interests in the traditional mind-brain, or 'hard,' problem. This paper reviews their approach to this problem, showing the influence of Bohr's complementarity thesis, the significance of Schroedinger's small book, 'What is life?,' the updated Platonism of Heisenberg and, perhaps most interesting of all, the interaction of Carl Jung and Wolfgang Pauli in the latter's search for a unification of mind and matter.

  10. Toward a Multiple Perspective in Family Theory and Practice: The Case of Social Exchange Theory, Symbolic Interactionism, and Conflict Theory.

    ERIC Educational Resources Information Center

    Rank, Mark R.; LeCroy, Craig W.

    1983-01-01

    Examines the complementarity of three often-used theories in family research: social exchange theory, symbolic interactionism, and conflict theory. Provides a case example in which a multiple perspective is applied to a problem of marital discord. Discusses implications for the clinician. (Author/WAS)

  11. Electrostatic complementarity between proteins and ligands. 1. Charge disposition, dielectric and interface effects.

    PubMed

    Chau, P L; Dean, P M

    1994-10-01

Electrostatic interactions have always been considered an important factor governing ligand-receptor interactions. Previous work in this field has established the existence of electrostatic complementarity between the ligand and its receptor site. However, this property has not been treated rigorously, and the description remains largely qualitative. In this work, 34 data sets of high quality were chosen from the Brookhaven Protein Databank. Electrostatic complementarity has been calculated between the surface potentials; complementarity is absent between adjacent or neighbouring atoms of the ligand and the receptor. There is little difference between complementarities on the total ligand surface and the interfacial region. Altering the homogeneous dielectric to distance-dependent dielectrics reduces the complementarity slightly, but does not affect the pattern of complementarity.

  12. Multidimensional entropic uncertainty relation based on a commutator matrix in position and momentum spaces

    NASA Astrophysics Data System (ADS)

    Hertz, Anaelle; Vanbever, Luc; Cerf, Nicolas J.

    2018-01-01

The uncertainty relation for continuous variables due to Białynicki-Birula and Mycielski [I. Białynicki-Birula and J. Mycielski, Commun. Math. Phys. 44, 129 (1975), 10.1007/BF01608825] expresses the complementarity between two n-tuples of canonically conjugate variables (x1,x2,...,xn) and (p1,p2,...,pn) in terms of Shannon differential entropy. Here we consider the generalization to variables that are not canonically conjugate and derive an entropic uncertainty relation expressing the balance between any two n-variable Gaussian projective measurements. The bound on entropies is expressed in terms of the determinant of a matrix of commutators between the measured variables. This uncertainty relation also captures the complementarity between any two incompatible linear canonical transforms, the bound being written in terms of the corresponding symplectic matrices in phase space. Finally, we extend this uncertainty relation to Rényi entropies and also prove a covariance-based uncertainty relation which generalizes the Robertson relation.
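For reference, the Białynicki-Birula-Mycielski bound being generalized can be written (in units with ħ = 1) as:

```latex
% Shannon differential entropies of the n-mode position and momentum
% distributions (hbar = 1). For canonically conjugate n-tuples the
% commutator matrix is trivial and the paper's generalized bound
% reduces to this classic inequality.
h(\vec{x}) + h(\vec{p}) \;\geq\; n \ln(\pi e),
\qquad
h(\vec{x}) = -\int \rho(\vec{x}) \ln \rho(\vec{x}) \, \mathrm{d}^{n}x .
```

For non-conjugate variables the right-hand side acquires a correction involving the determinant of the matrix of commutators between the measured variables; the precise form is given in the cited paper.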

  13. Solving multi-leader-common-follower games.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leyffer, S.; Munson, T.; Mathematics and Computer Science

Multi-leader-common-follower games arise when modelling two or more competitive firms, the leaders, that commit to their decisions prior to another group of competitive firms, the followers, that react to the decisions made by the leaders. These problems lead in a natural way to equilibrium problems with equilibrium constraints (EPECs). We develop a characterization of the solution sets for these problems and examine a variety of nonlinear optimization and nonlinear complementarity formulations of EPECs. We distinguish two broad cases: problems where the leaders can cost-differentiate and problems with price-consistent followers. We demonstrate the practical viability of our approach by solving a range of medium-sized test problems.

  14. Enhanced Anion Transport Using Some Expanded Porphyrins as Carriers.

    DTIC Science & Technology

    1991-01-01

is able to bind a smaller chemical species. The substrate is the species whose binding is being sought. It can be neutral as well as charged, such as a... "ligand-protein-central metal cation-guest anion" ternary interactions. To date, non-biological, synthetically made polyammonium macrocycles and... complementarity between these spherical anions and the ellipsoidal cavity of 6-6H+. The cavity of the bis-tren receptor is best suited for the linear

  15. Birth-Order Complementarity and Marital Adjustment.

    ERIC Educational Resources Information Center

    Vos, Cornelia J. Vanderkooy; Hayden, Delbert J.

    1985-01-01

    Tested the influence of birth-order complementarity on marital adjustment among 327 married women using the Spanier Dyadic Adjustment Scale (1976). Birth-order complementarity was found to be unassociated with marital adjustment. (Author/BL)

  16. Distributed synchronization control of complex networks with communication constraints.

    PubMed

    Xu, Zhenhua; Zhang, Dan; Song, Hongbo

    2016-11-01

This paper is concerned with the distributed synchronization control of complex networks with communication constraints. In this work, the controllers communicate with each other through a wireless network, acting as a controller network. Due to constrained transmission power, packet size reduction and transmission rate reduction schemes are proposed to help reduce the communication load of the controller network. The packet dropout problem is also considered in the controller design, since it is often encountered in networked control systems. We show that the closed-loop system can be modeled as a switched system with uncertainties and random variables. By resorting to the switched system approach and stochastic system analysis methods, a new sufficient condition is first proposed such that exponential synchronization is guaranteed in the mean-square sense. The controller gains are determined by using the well-known cone complementarity linearization (CCL) algorithm. Finally, a simulation study is performed, which demonstrates the effectiveness of the proposed design algorithm. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
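The CCL algorithm mentioned above is a standard device for synthesis conditions in which a Lyapunov matrix and its inverse both appear as variables; a schematic of the usual formulation (generic notation, not the paper's exact conditions) is:

```latex
% The nonconvex coupling P = Q^{-1} (P, Q \succ 0) is relaxed to an LMI,
% and PQ = I is recovered by driving tr(PQ) down to its minimum value n:
\min_{P,\,Q}\ \operatorname{tr}(PQ)
\quad \text{s.t.} \quad
\begin{bmatrix} P & I \\ I & Q \end{bmatrix} \succeq 0,
\ \text{plus the synthesis LMIs.}
% CCL handles the nonconvex objective by iterative linearization: at
% step k, minimize tr(P_k Q + Q_k P) subject to the same constraints,
% then update (P_k, Q_k) with the new optimizers until tr(PQ) -> n.
```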

  17. Complementarity and Area-Efficiency in the Prioritization of the Global Protected Area Network.

    PubMed

    Kullberg, Peter; Toivonen, Tuuli; Montesino Pouzols, Federico; Lehtomäki, Joona; Di Minin, Enrico; Moilanen, Atte

    2015-01-01

Complementarity and cost-efficiency are widely used principles for protected area network design. Despite their wide use and robust theoretical underpinnings, their effects on the performance and patterns of priority areas are rarely studied in detail. Here we compare two approaches for identifying management priority areas inside the global protected area network: 1) a scoring-based approach, used in a recently published analysis, and 2) a spatial prioritization method that accounts for complementarity and area-efficiency. Using the same IUCN species distribution data, the complementarity method found an equal-area set of priority areas with double the mean species range coverage of the scoring-based approach. The complementarity set also had 72% more species with full ranges covered, and lacked any coverage for only half as many species as the scoring approach. Protected areas in our complementarity-based solution were on average smaller and geographically more scattered. The large difference between the two solutions highlights the need for critical thinking about the selected prioritization method. According to our analysis, accounting for complementarity and area-efficiency can lead to considerable improvements when setting management priorities for the global protected area network.
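The contrast between the two approaches can be sketched with toy code (hypothetical site-by-species data; the study itself used IUCN ranges and spatial prioritization software): greedy complementarity counts only species not yet covered, while scoring ranks sites by raw richness and so can pick redundant sites:

```python
import numpy as np

def greedy_complementarity(presence, k):
    """Greedy complementarity-based selection: at each step pick the site
    that adds the most not-yet-covered species (presence: sites x species
    boolean matrix). A toy stand-in for real prioritization tools."""
    presence = np.asarray(presence, bool)
    covered = np.zeros(presence.shape[1], bool)
    chosen = []
    for _ in range(k):
        gains = (presence & ~covered).sum(axis=1)
        gains[chosen] = -1          # never re-pick a site
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= presence[best]
    return chosen, int(covered.sum())

def scoring_selection(presence, k):
    """Scoring approach: rank sites by raw species richness, ignoring overlap."""
    presence = np.asarray(presence, bool)
    order = np.argsort(-presence.sum(axis=1), kind="stable")[:k]
    return list(order), int(presence[order].any(axis=0).sum())

# Toy data: two rich but redundant sites vs. one poorer, complementary site.
presence = [[1, 1, 1, 0],   # site 0
            [1, 1, 1, 0],   # site 1 (duplicates site 0)
            [0, 0, 0, 1]]   # site 2 (holds the only unique species)
g_sites, g_cov = greedy_complementarity(presence, 2)  # covers all 4 species
s_sites, s_cov = scoring_selection(presence, 2)       # covers only 3
```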

  18. Affiliation and control in marital interaction: interpersonal complementarity is present but is not associated with affect or relationship quality.

    PubMed

    Cundiff, Jenny M; Smith, Timothy W; Butner, Jonathan; Critchfield, Kenneth L; Nealey-Moore, Jill

    2015-01-01

The principle of complementarity in interpersonal theory states that an actor's behavior tends to "pull, elicit, invite, or evoke" responses from interaction partners who are similar in affiliation (i.e., warmth vs. hostility) and opposite in control (i.e., dominance vs. submissiveness). Furthermore, complementary interactions are proposed to evoke less negative affect and promote greater relationship satisfaction. These predictions were examined in two studies of married couples. Results suggest that complementarity in affiliation describes a robust general pattern of marital interaction, but complementarity in control varies across contexts. Consistent with behavioral models of marital interaction, greater levels of affiliation and lower control by partners (not complementarity in affiliation or control) were associated with less anger and anxiety and greater relationship quality. Partners' levels of affiliation and control combined in ways other than complementarity (mostly additively, but sometimes synergistically) to predict negative affect and relationship satisfaction. © 2014 by the Society for Personality and Social Psychology, Inc.

  19. Interpersonal Complementarity in the Mental Health Intake: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Rosen, Daniel C.; Miller, Alisa B.; Nakash, Ora; Halperin, Lucila; Alegria, Margarita

    2012-01-01

    The study examined which socio-demographic differences between clients and providers influenced interpersonal complementarity during an initial intake session; that is, behaviors that facilitate harmonious interactions between client and provider. Complementarity was assessed using blinded ratings of 114 videotaped intake sessions by trained…

  20. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    NASA Astrophysics Data System (ADS)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. 
This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using a DC load flow approximation). Chapter 9 shows the price results. In contrast to prior market power simulations of these markets, much greater variability in price-cost margins (PCMs) is found when using a realistic model of hourly conditions on such a large network. Chapter 10 shows that the conventional concentration indices (HHIs) are poorly correlated with PCMs. Finally, Chapter 11 proposes applying the simulation models to merger analysis and provides two large-scale merger examples. (Abstract shortened by UMI.)
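As a toy illustration of the Cournot equilibria such models compute (a single-node market, without the network, arbitrage, or transmission features of the dissertation's mixed LCP formulation), best-response iteration under linear inverse demand p = a - bQ with constant marginal costs converges to the textbook equilibrium:

```python
def cournot_equilibrium(a, b, costs, iters=200):
    """Gauss-Seidel best-response iteration for an N-firm Cournot game
    with inverse demand p = a - b * Q and constant marginal costs."""
    q = [0.0] * len(costs)
    for _ in range(iters):
        for i, c in enumerate(costs):
            others = sum(q) - q[i]
            # firm i's profit-maximizing response, truncated at zero output
            q[i] = max(0.0, (a - c - b * others) / (2.0 * b))
    return q

# Symmetric duopoly: a = 100, b = 1, c = 10 gives q_i = 30 and p = 40.
q = cournot_equilibrium(100.0, 1.0, [10.0, 10.0])
p = 100.0 - 1.0 * sum(q)
```

The network models in the dissertation instead stack every firm's first-order conditions and the transmission constraints into one mixed LCP and solve them simultaneously.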

  1. Electrostatic complementarity at protein/protein interfaces.

    PubMed

    McCoy, A J; Chandana Epa, V; Colman, P M

    1997-05-02

    Calculation of the electrostatic potential of protein-protein complexes has led to the general assertion that protein-protein interfaces display "charge complementarity" and "electrostatic complementarity". In this study, quantitative measures for these two terms are developed and used to investigate protein-protein interfaces in a rigorous manner. Charge complementarity (CC) was defined using the correlation of charges on nearest neighbour atoms at the interface. All 12 protein-protein interfaces studied had insignificantly small CC values. Therefore, the term charge complementarity is not appropriate for the description of protein-protein interfaces when used in the sense measured by CC. Electrostatic complementarity (EC) was defined using the correlation of surface electrostatic potential at protein-protein interfaces. All twelve protein-protein interfaces studied had significant EC values, and thus the assertion that protein-protein association involves surfaces with complementary electrostatic potential was substantially confirmed. The term electrostatic complementarity can therefore be used to describe protein-protein interfaces when used in the sense measured by EC. Taken together, the results for CC and EC demonstrate the relevance of the long-range effects of charges, as described by the electrostatic potential at the binding interface. The EC value did not partition the complexes by type such as antigen-antibody and proteinase-inhibitor, as measures of the geometrical complementarity at protein-protein interfaces have done. The EC value was also not directly related to the number of salt bridges in the interface, and neutralisation of these salt bridges showed that other charges also contributed significantly to electrostatic complementarity and electrostatic interactions between the proteins. 
Electrostatic complementarity as defined by EC was extended to investigate the electrostatic similarity at the surface of influenza virus neuraminidase where the epitopes of two monoclonal antibodies, NC10 and NC41, overlap. Although NC10 and NC41 both have quite high values of EC for their interaction with neuraminidase, the similarity in electrostatic potential generated by the two on the overlapping region of the epitopes is insignificant. Thus, it is possible for two antibodies to recognise the electrostatic surface of a protein in dissimilar ways.
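Both CC and EC are correlation-based scores over matched interface points. A minimal stand-in (hypothetical helper, not the authors' implementation, which evaluates electrostatic potentials on molecular surfaces) illustrates the sign convention, with anticorrelated potentials scoring as complementary:

```python
import numpy as np

def electrostatic_complementarity(phi_a, phi_b):
    """Toy EC-style score: Pearson correlation between the potentials the
    two partners present at matched interface points, negated so that
    perfectly anticorrelated (complementary) surfaces score +1."""
    phi_a, phi_b = np.asarray(phi_a, float), np.asarray(phi_b, float)
    return -np.corrcoef(phi_a, phi_b)[0, 1]

# Perfectly complementary potentials score +1; identical ones score -1.
phi = np.array([1.0, -2.0, 0.5, 3.0, -1.5])
ec_complementary = electrostatic_complementarity(phi, -phi)
ec_identical = electrostatic_complementarity(phi, phi)
```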

  2. Computational sequence analysis of predicted long dsRNA transcriptomes of major crops reveals sequence complementarity with human genes.

    PubMed

    Jensen, Peter D; Zhang, Yuanji; Wiggins, B Elizabeth; Petrick, Jay S; Zhu, Jin; Kerstetter, Randall A; Heck, Gregory R; Ivashuta, Sergey I

    2013-01-01

    Long double-stranded RNAs (long dsRNAs) are precursors for the effector molecules of sequence-specific RNA-based gene silencing in eukaryotes. Plant cells can contain numerous endogenous long dsRNAs. This study demonstrates that such endogenous long dsRNAs in plants have sequence complementarity to human genes. Many of these complementary long dsRNAs have perfect sequence complementarity of at least 21 nucleotides to human genes; enough complementarity to potentially trigger gene silencing in targeted human cells if delivered in functional form. However, the number and diversity of long dsRNA molecules in plant tissue from crops such as lettuce, tomato, corn, soy and rice with complementarity to human genes that have a long history of safe consumption supports a conclusion that long dsRNAs do not present a significant dietary risk.
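The screen described above reduces to exact k-mer matching. A minimal sketch (hypothetical helper names, DNA alphabet for simplicity) that flags 21-mers of a dsRNA matching a transcript on either strand:

```python
def revcomp(seq):
    """Reverse complement of a DNA-alphabet sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def shared_21mers(dsrna, transcript, k=21):
    """Return k-mers of the dsRNA that occur in the transcript directly
    or as reverse complements -- a toy stand-in for the paper's screen
    for >= 21 nt of perfect sequence complementarity."""
    kmers = {transcript[i:i + k] for i in range(len(transcript) - k + 1)}
    hits = set()
    for i in range(len(dsrna) - k + 1):
        kmer = dsrna[i:i + k]
        if kmer in kmers or revcomp(kmer) in kmers:
            hits.add(kmer)
    return hits

# A dsRNA carrying the reverse complement of a 21-nt transcript region is flagged.
target = "ATGCGTACGTTAGCATGCAAT"          # 21 nt
transcript = "GGG" + target + "CCC"
dsrna = "TT" + revcomp(target) + "AA"
hits = shared_21mers(dsrna, transcript)
```

Genome-scale screens use indexed aligners rather than Python sets, but the matching criterion is the same.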

  3. Beam tracking phase tomography with laboratory sources

    NASA Astrophysics Data System (ADS)

    Vittoria, F. A.; Endrizzi, M.; Kallon, G. K. N.; Hagen, C. K.; Diemoz, P. C.; Zamir, A.; Olivo, A.

    2018-04-01

    An X-ray phase-contrast laboratory system is presented, based on the beam-tracking method. Beam-tracking relies on creating micro-beamlets of radiation by placing a structured mask before the sample, and analysing them by using a detector with sufficient resolution. The system is used in tomographic configuration to measure the three dimensional distribution of the linear attenuation coefficient, difference from unity of the real part of the refractive index, and of the local scattering power of specimens. The complementarity of the three signals is investigated, together with their potential use for material discrimination.

  4. Regularized iterative integration combined with non-linear diffusion filtering for phase-contrast x-ray computed tomography.

    PubMed

    Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter

    2014-12-29

Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter on phase-contrast data prior to or after filtered-backprojection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm makes it possible to reveal relevant sample features, as confirmed quantitatively by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at a considerably lower dose.
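The Perona-Malik filter used above is edge-preserving anisotropic diffusion; a minimal 2-D sketch with generic parameters (not the paper's settings):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Minimal Perona-Malik anisotropic diffusion: smooths within regions
    while preserving edges, via conduction g(d) = exp(-(d/kappa)^2)."""
    g = lambda d: np.exp(-(d / kappa) ** 2)
    u = np.asarray(img, float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours, with zero flux at the borders
        dn = np.roll(u, -1, axis=0) - u; dn[-1, :] = 0.0
        ds = np.roll(u, 1, axis=0) - u;  ds[0, :] = 0.0
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0.0
        dw = np.roll(u, 1, axis=1) - u;  dw[:, 0] = 0.0
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy vertical step: diffusion removes the noise but keeps the edge,
# because the large edge gradient drives the conduction g toward zero.
rng = np.random.default_rng(0)
step = np.zeros((32, 32)); step[:, 16:] = 1.0
noisy = step + 0.02 * rng.standard_normal(step.shape)
smoothed = perona_malik(noisy)
```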

  5. Non-linear aeroelastic prediction for aircraft applications

    NASA Astrophysics Data System (ADS)

    de C. Henshaw, M. J.; Badcock, K. J.; Vio, G. A.; Allen, C. B.; Chamberlain, J.; Kaynes, I.; Dimitriadis, G.; Cooper, J. E.; Woodgate, M. A.; Rampurawala, A. M.; Jones, D.; Fenwick, C.; Gaitonde, A. L.; Taylor, N. V.; Amor, D. S.; Eccles, T. A.; Denley, C. J.

    2007-05-01

    Current industrial practice for the prediction and analysis of flutter relies heavily on linear methods and this has led to overly conservative design and envelope restrictions for aircraft. Although the methods have served the industry well, it is clear that for a number of reasons the inclusion of non-linearity in the mathematical and computational aeroelastic prediction tools is highly desirable. The increase in available and affordable computational resources, together with major advances in algorithms, mean that non-linear aeroelastic tools are now viable within the aircraft design and qualification environment. The Partnership for Unsteady Methods in Aerodynamics (PUMA) Defence and Aerospace Research Partnership (DARP) was sponsored in 2002 to conduct research into non-linear aeroelastic prediction methods and an academic, industry, and government consortium collaborated to address the following objectives: To develop useable methodologies to model and predict non-linear aeroelastic behaviour of complete aircraft. To evaluate the methodologies on real aircraft problems. To investigate the effect of non-linearities on aeroelastic behaviour and to determine which have the greatest effect on the flutter qualification process. These aims have been very effectively met during the course of the programme and the research outputs include: New methods available to industry for use in the flutter prediction process, together with the appropriate coaching of industry engineers. Interesting results in both linear and non-linear aeroelastics, with comprehensive comparison of methods and approaches for challenging problems. Additional embryonic techniques that, with further research, will further improve aeroelastics capability. This paper describes the methods that have been developed and how they are deployable within the industrial environment. 
We present a thorough review of the PUMA aeroelastics programme together with a comprehensive review of the relevant research in this domain. This is set within the context of a generic industrial process and the requirements of UK and US aeroelastic qualification. A range of test cases, from simple small DOF cases to full aircraft, have been used to evaluate and validate the non-linear methods developed and to make comparison with the linear methods in everyday use. These have focused mainly on aerodynamic non-linearity, although some results for structural non-linearity are also presented. The challenges associated with time domain (coupled computational fluid dynamics-computational structural model (CFD-CSM)) methods have been addressed through the development of grid movement, fluid-structure coupling, and control surface movement technologies. Conclusions regarding the accuracy and computational cost of these are presented. The computational cost of time-domain methods, despite substantial improvements in efficiency, remains high. However, significant advances have been made in reduced order methods, that allow non-linear behaviour to be modelled, but at a cost comparable with that of the regular linear methods. Of particular note is a method based on Hopf bifurcation that has reached an appropriate maturity for deployment on real aircraft configurations, though only limited results are presented herein. Results are also presented for dynamically linearised CFD approaches that hold out the possibility of non-linear results at a fraction of the cost of time coupled CFD-CSM methods. 
Local linearisation approaches (higher order harmonic balance and continuation method) are also presented; these have the advantage that no prior assumption of the nature of the aeroelastic instability is required, but currently these methods are limited to low DOF problems and it is thought that these will not reach a level of maturity appropriate to real aircraft problems for some years to come. Nevertheless, guidance on the most likely approaches has been derived and this forms the basis for ongoing research. It is important to recognise that the aeroelastic design and qualification requires a variety of methods applicable at different stages of the process. The methods reported herein are mapped to the process, so that their applicability and complementarity may be understood. Overall, the programme has provided a suite of methods that allow realistic consideration of non-linearity in the aeroelastic design and qualification of aircraft. Deployment of these methods is underway in the industrial environment, but full realisation of the benefit of these approaches will require appropriate engagement with the standards community so that safety standards may take proper account of the inclusion of non-linearity.

  6. Complementarity of genuine multipartite Bell nonlocality

    NASA Astrophysics Data System (ADS)

    Sami, Sasha; Chakrabarty, Indranil; Chaturvedi, Anubhav

    2017-08-01

    We introduce a feature of no-signaling (Bell) nonlocal theories: namely, when a system of multiple parties manifests genuine nonlocal correlation, then there cannot be arbitrarily high nonlocal correlation among any subset of the parties. We call this feature complementarity of genuine multipartite nonlocality. We use Svetlichny's criterion for genuine multipartite nonlocality and nonlocal games to derive the complementarity relations under no-signaling constraints. We find that the complementarity relations are tightened for the much stricter quantum constraints. We compare this notion with the well-known notion of monogamy of nonlocality. As a consequence, we obtain tighter nontrivial monogamy relations that take into account genuine multipartite nonlocality. Furthermore, we provide numerical evidence showcasing this feature using a bipartite measure and several other well-known tripartite measures of nonlocality.

  7. Drinkers and bettors: investigating the complementarity of alcohol consumption and problem gambling.

    PubMed

    French, Michael T; Maclean, Johanna Catherine; Ettner, Susan L

    2008-07-01

    Regulated gambling is a multi-billion dollar industry in the United States with greater than 100% increases in revenue over the past decade. Along with this rise in gambling popularity and gaming options comes an increased risk of addiction and the associated social costs. This paper focuses on the effect of alcohol use on gambling-related problems. Variables correlated with both alcohol use and gambling may be difficult to observe, and the inability to include these items in empirical models may bias coefficient estimates. After addressing the endogeneity of alcohol use when appropriate, we find strong evidence that problematic gambling and alcohol consumption are complementary activities.

  8. The Black Hole Information Problem

    NASA Astrophysics Data System (ADS)

    Polchinski, Joseph

    The black hole information problem has been a challenge since Hawking's original 1975 paper. It led to the discovery of AdS/CFT, which gave a partial resolution of the paradox. However, recent developments, in particular the firewall puzzle, show that there is much that we do not understand. I review the black hole, Hawking radiation, and the Page curve, and the classic form of the paradox. I discuss AdS/CFT as a partial resolution. I then discuss black hole complementarity and its limitations, leading to many proposals for different kinds of `drama.' I conclude with some recent ideas. Presented at the 2014-15 Jerusalem Winter School and the 2015 TASI.

  9. Self-Complementarity within Proteins: Bridging the Gap between Binding and Folding

    PubMed Central

    Basu, Sankar; Bhattacharyya, Dhananjay; Banerjee, Rahul

    2012-01-01

    Complementarity, in terms of both shape and electrostatic potential, has been quantitatively estimated at protein-protein interfaces and used extensively to predict the specific geometry of association between interacting proteins. In this work, we attempted to place both binding and folding on a common conceptual platform based on complementarity. To that end, we estimated (for the first time to our knowledge) electrostatic complementarity (Em) for residues buried within proteins. Em measures the correlation of surface electrostatic potential at protein interiors. The results show fairly uniform and significant values for all amino acids. Interestingly, hydrophobic side chains also attain appreciable complementarity primarily due to the trajectory of the main chain. Previous work from our laboratory characterized the surface (or shape) complementarity (Sm) of interior residues, and both of these measures have now been combined to derive two scoring functions to identify the native fold amid a set of decoys. These scoring functions are somewhat similar to functions that discriminate among multiple solutions in a protein-protein docking exercise. The performances of both of these functions on state-of-the-art databases were comparable if not better than most currently available scoring functions. Thus, analogously to interfacial residues of protein chains associated (docked) with specific geometry, amino acids found in the native interior have to satisfy fairly stringent constraints in terms of both Sm and Em. The functions were also found to be useful for correctly identifying the same fold for two sequences with low sequence identity. Finally, inspired by the Ramachandran plot, we developed a plot of Sm versus Em (referred to as the complementarity plot) that identifies residues with suboptimal packing and electrostatics which appear to be correlated to coordinate errors. PMID:22713576

  10. Self-complementarity within proteins: bridging the gap between binding and folding.

    PubMed

    Basu, Sankar; Bhattacharyya, Dhananjay; Banerjee, Rahul

    2012-06-06

    Complementarity, in terms of both shape and electrostatic potential, has been quantitatively estimated at protein-protein interfaces and used extensively to predict the specific geometry of association between interacting proteins. In this work, we attempted to place both binding and folding on a common conceptual platform based on complementarity. To that end, we estimated (for the first time to our knowledge) electrostatic complementarity (Em) for residues buried within proteins. Em measures the correlation of surface electrostatic potential at protein interiors. The results show fairly uniform and significant values for all amino acids. Interestingly, hydrophobic side chains also attain appreciable complementarity primarily due to the trajectory of the main chain. Previous work from our laboratory characterized the surface (or shape) complementarity (Sm) of interior residues, and both of these measures have now been combined to derive two scoring functions to identify the native fold amid a set of decoys. These scoring functions are somewhat similar to functions that discriminate among multiple solutions in a protein-protein docking exercise. The performances of both of these functions on state-of-the-art databases were comparable if not better than most currently available scoring functions. Thus, analogously to interfacial residues of protein chains associated (docked) with specific geometry, amino acids found in the native interior have to satisfy fairly stringent constraints in terms of both Sm and Em. The functions were also found to be useful for correctly identifying the same fold for two sequences with low sequence identity. Finally, inspired by the Ramachandran plot, we developed a plot of Sm versus Em (referred to as the complementarity plot) that identifies residues with suboptimal packing and electrostatics which appear to be correlated to coordinate errors. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  11. Generalized uncertainty principle: implications for black hole complementarity

    NASA Astrophysics Data System (ADS)

    Chen, Pisin; Ong, Yen Chin; Yeom, Dong-han

    2014-12-01

    At the heart of the black hole information loss paradox and the firewall controversy lies the conflict between quantum mechanics and general relativity. Much has been said about quantum corrections to general relativity, but much less in the opposite direction. It is therefore crucial to examine possible corrections to quantum mechanics due to gravity. Indeed, the Heisenberg Uncertainty Principle is one profound feature of quantum mechanics, which nevertheless may receive correction when gravitational effects become important. Such generalized uncertainty principle [GUP] has been motivated from not only quite general considerations of quantum mechanics and gravity, but also string theoretic arguments. We examine the role of GUP in the context of black hole complementarity. We find that while complementarity can be violated by large N rescaling if one assumes only the Heisenberg's Uncertainty Principle, the application of GUP may save complementarity, but only if certain N -dependence is also assumed. This raises two important questions beyond the scope of this work, i.e., whether GUP really has the proposed form of N -dependence, and whether black hole complementarity is indeed correct.

  12. Presence of Trifolium repens Promotes Complementarity of Water Use and N Facilitation in Diverse Grass Mixtures.

    PubMed

    Hernandez, Pauline; Picon-Cochard, Catherine

    2016-01-01

    Legume species promote productivity and increase the digestibility of herbage in grasslands. Considerable experimental data also indicate that communities with legumes produce more above-ground biomass than is expected from monocultures. While this has been attributed to N facilitation, evidence to identify the mechanisms involved is still lacking, and the role of complementarity in soil water acquisition through vertical root differentiation remains unclear. We used a 20-month mesocosm experiment to investigate the effects of species richness (single species, two- and five-species mixtures) and functional diversity (presence of the legume Trifolium repens) on a set of traits related to light, N, and water use, measured at the community level. We found a positive effect of Trifolium presence and abundance on biomass production and complementarity effects in the two-species mixtures from the second year. In addition, the community traits related to water and N acquisition and use (leaf area, N, water-use efficiency, and deep root growth) were higher in the presence of Trifolium. With a multiple regression approach, we showed that the traits related to water acquisition and use were, together with N, the main determinants of biomass production and complementarity effects in diverse mixtures. At shallow soil layers, the lower root mass of Trifolium and higher soil moisture should increase soil water availability for the associated grass species. Conversely, at deep soil layers, higher root growth and lower soil moisture mirror the increased soil resource use of the mixtures. Altogether, these results highlight N facilitation but, above all, vertical soil differentiation and thus complementarity for water acquisition and use in mixtures with Trifolium. In contrast to grass-Trifolium mixtures, no significant over-yielding was measured for grass mixtures, even those having complementary traits (short and shallow vs. tall and deep). Thus, vertical complementarity for soil resource uptake in mixtures depended not only on the inherent root system architecture but also on root plasticity. We also observed a time dependence of positive complementarity effects due to the slow development of Trifolium in mixtures, possibly induced by competition with grasses. Overall, our data underline that soil water was an important driver of over-yielding and complementarity effects in Trifolium-grass mixtures.

  13. Information-reality complementarity: The role of measurements and quantum reference frames

    NASA Astrophysics Data System (ADS)

    Dieguez, P. R.; Angelo, R. M.

    2018-02-01

    Recently, a measure has been put forward which allows for the quantification of the degree of reality of an observable for a given preparation [Bilobran and Angelo, Europhys. Lett. 112, 40005 (2015), 10.1209/0295-5075/112/40005]. Here we employ this quantifier to establish, on formal grounds, relations among the concepts of measurement, information, and physical reality. After introducing mathematical objects that unify weak and projective measurements, we study scenarios showing that an arbitrary-intensity unrevealed measurement of a given observable generally leads to an increase of its reality and also of its incompatible observables. We derive a complementarity relation connecting an amount of information associated with the apparatus with the degree of irreality of the monitored observable. Specifically for pure states, we show that the entanglement with the apparatus precisely determines the amount by which the reality of the monitored observable increases. We also point out some mechanisms whereby the irreality of an observable can be generated. Finally, using the aforementioned tools, we construct a consistent picture to address the measurement problem.

  14. Weighted Watson-Crick automata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku

    There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded input sequences related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA-based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants. We show that weighted variants of Watson-Crick automata increase their generative power.

  15. Weighted Watson-Crick automata

    NASA Astrophysics Data System (ADS)

    Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku

    2014-07-01

    There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded input sequences related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA-based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants. We show that weighted variants of Watson-Crick automata increase their generative power.
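
    The two-head reading of complementarity-related strands and the weight multiplication described in this abstract can be illustrated with a small sketch. This is not the authors' construction: the states, transitions, and weights below are invented, and both heads advance in lockstep, whereas general Watson-Crick automata allow independent head movement.

```python
# Toy weighted Watson-Crick automaton: reads the upper strand and its
# Watson-Crick complement with two heads moving in lockstep, multiplying
# the weight of each transition (probabilistic/fuzzy-style weights).
WC = {"A": "T", "T": "A", "C": "G", "G": "C"}  # nucleotide complementarity

def complement(strand):
    return "".join(WC[b] for b in strand)

# (state, upper_symbol, lower_symbol) -> (next_state, weight); invented values.
TRANS = {
    ("q0", "A", "T"): ("q0", 0.9),
    ("q0", "C", "G"): ("q1", 0.5),
    ("q1", "G", "C"): ("q1", 0.8),
    ("q1", "A", "T"): ("q0", 0.7),
}

def run(upper, start="q0", finals=("q1",)):
    """Weight of accepting the double strand (0.0 if rejected)."""
    state, weight = start, 1.0
    for u, l in zip(upper, complement(upper)):
        if (state, u, l) not in TRANS:
            return 0.0                      # no transition: reject
        state, w = TRANS[(state, u, l)]
        weight *= w
    return weight if state in finals else 0.0

print(run("ACG"))   # accepted in q1 with weight 0.9 * 0.5 * 0.8 (about 0.36)
print(run("AT"))    # no transition on (q0, T, A): rejected, weight 0.0
```

    A probabilistic variant would additionally require the weights of each state's outgoing transitions to sum to one, while a fuzzy variant would combine weights with min/max operations instead of multiplication.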

  16. Complementarity in false memory illusions.

    PubMed

    Brainerd, C J; Reyna, V F

    2018-03-01

    For some years, the DRM illusion has been the most widely studied form of false memory. The consensus theoretical interpretation is that the illusion is a reality reversal, in which certain new words (critical distractors) are remembered as though they are old list words rather than as what they are: new words that are similar to old ones. This reality-reversal interpretation is supported by compelling lines of evidence, but prior experiments are limited by the fact that their memory tests only asked whether test items were old. We removed that limitation by also asking whether test items were new-similar. This more comprehensive methodology revealed that list words and critical distractors are remembered quite differently. Memory for list words is compensatory: They are remembered as old at high rates and remembered as new-similar at very low rates. In contrast, memory for critical distractors is complementary: They are remembered as both old and new-similar at high rates, which means that the DRM procedure induces a complementarity illusion rather than a reality reversal. The conjoint recognition model explains complementarity as a function of three retrieval processes (semantic familiarity, target recollection, and context recollection), and it predicts that complementarity can be driven up or down by varying the mix of those processes. Our experiments generated data on that prediction and introduced a convenient statistic, the complementarity ratio, which measures (a) the level of complementarity in memory performance and (b) whether its direction is reality-consistent or reality-reversed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Resilience and stability of a pelagic marine ecosystem

    PubMed Central

    Lindegren, Martin; Checkley, David M.; Ohman, Mark D.; Koslow, J. Anthony; Goericke, Ralf

    2016-01-01

    The accelerating loss of biodiversity and ecosystem services worldwide has accentuated a long-standing debate on the role of diversity in stabilizing ecological communities and has given rise to a field of research on biodiversity and ecosystem functioning (BEF). Although broad consensus has been reached regarding the positive BEF relationship, a number of important challenges remain unanswered. These primarily concern the underlying mechanisms by which diversity increases resilience and community stability, particularly the relative importance of statistical averaging and functional complementarity. Our understanding of these mechanisms relies heavily on theoretical and experimental studies, yet the degree to which theory adequately explains the dynamics and stability of natural ecosystems is largely unknown, especially in marine ecosystems. Using modelling and a unique 60-year dataset covering multiple trophic levels, we show that the pronounced multi-decadal variability of the Southern California Current System (SCCS) does not represent fundamental changes in ecosystem functioning, but a linear response to key environmental drivers channelled through bottom-up and physical control. Furthermore, we show strong temporal asynchrony between key species or functional groups within multiple trophic levels caused by opposite responses to these drivers. We argue that functional complementarity is the primary mechanism reducing community variability and promoting resilience and stability in the SCCS. PMID:26763697

  18. Heisenberg and the Interpretation of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Camilleri, Kristian

    2011-09-01

    Preface; 1. Introduction; Part I. The Emergence of Quantum Mechanics: 2. Quantum mechanics and the principle of observability; 3. The problem of interpretation; Part II. The Heisenberg-Bohr Dialogue: 4. The wave-particle duality; 5. Indeterminacy and the limits of classical concepts: the turning point in Heisenberg's thought; 6. Heisenberg and Bohr: divergent viewpoints of complementarity; Part III. Heisenberg's Epistemology and Ontology of Quantum Mechanics: 7. The transformation of Kantian philosophy; 8. The linguistic turn in Heisenberg's thought; Conclusion; References; Index.

  19. Heisenberg and the Interpretation of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Camilleri, Kristian

    2009-02-01

    Preface; 1. Introduction; Part I. The Emergence of Quantum Mechanics: 2. Quantum mechanics and the principle of observability; 3. The problem of interpretation; Part II. The Heisenberg-Bohr Dialogue: 4. The wave-particle duality; 5. Indeterminacy and the limits of classical concepts: the turning point in Heisenberg's thought; 6. Heisenberg and Bohr: divergent viewpoints of complementarity; Part III. Heisenberg's Epistemology and Ontology of Quantum Mechanics: 7. The transformation of Kantian philosophy; 8. The linguistic turn in Heisenberg's thought; Conclusion; References; Index.

  20. Convergence analysis of a monotonic penalty method for American option pricing

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Yang, Xiaoqi; Teo, Kok Lay

    2008-12-01

    This paper studies the convergence of a monotonic penalty method for pricing American options. A monotonic penalty method is first proposed to solve the complementarity problem arising from the valuation of American options, which produces a nonlinear degenerate parabolic PDE with the Black-Scholes operator. Based on variational theory, the solvability and convergence properties of this penalty approach are established in a proper infinite-dimensional space. Moreover, the convergence rate of the combination of two power penalty functions is obtained.
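
    The penalty idea in this abstract can be illustrated in finite dimensions. This is a hedged sketch of the general penalty principle, not the paper's method: the paper analyses power penalties for the continuous Black-Scholes complementarity problem, while here a linear penalty (power k = 1) is applied to a toy linear complementarity problem with invented data.

```python
# Sketch: the LCP  x >= 0,  M x + q >= 0,  x^T (M x + q) = 0  is replaced by
# the penalized equation  M x + q = lam * max(-x, 0).  As lam -> infinity,
# the solution of the penalized problem converges to the LCP solution.
import numpy as np

def lcp_penalty(M, q, lam=1e6, max_iter=50):
    """Solve the linearly penalized system by iterating on the sign pattern."""
    n = len(q)
    active = np.zeros(n, dtype=bool)           # components currently penalized
    for _ in range(max_iter):
        A = M + lam * np.diag(active.astype(float))
        x = np.linalg.solve(A, -q)
        new_active = x < 0
        if np.array_equal(new_active, active):  # sign pattern stable: done
            return x
        active = new_active
    return x                                    # fallback if not converged

M = np.array([[2.0, -1.0], [-1.0, 2.0]])        # symmetric positive definite
q = np.array([-1.0, 2.0])

x = lcp_penalty(M, q)
w = M @ x + q
print(x)   # ~ [0.5, 0.0]: penalty drives x toward the LCP solution
print(w)   # ~ [0.0, 1.5]: complementary slackness holds in the limit
```

    As lam grows, the negative part of x shrinks like 1/lam, matching the intuition that the penalty enforces the constraint x >= 0 only in the limit; the cited paper quantifies exactly this convergence rate for power penalties.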

  1. Model of early self-replication based on covalent complementarity for a copolymer of glycerate-3-phosphate and glycerol-3-phosphate

    NASA Technical Reports Server (NTRS)

    Weber, Arthur L.

    1989-01-01

    Glyceraldehyde-3-phosphate acts as the substrate in a model of early self-replication of a phosphodiester copolymer of glycerate-3-phosphate and glycerol-3-phosphate. This model of self-replication is based on covalent complementarity in which information transfer is mediated by a single covalent bond, in contrast to multiple weak interactions that establish complementarity in nucleic acid replication. This replication model is connected to contemporary biochemistry through its use of glyceraldehyde-3-phosphate, a central metabolite of glycolysis and photosynthesis.

  2. Drinkers and Bettors: Investigating the Complementarity of Alcohol Consumption and Problem Gambling

    PubMed Central

    Maclean, Johanna Catherine; Ettner, Susan L.

    2009-01-01

    Regulated gambling is a multi-billion dollar industry in the United States with greater than 100 percent increases in revenue over the past decade. Along with this rise in gambling popularity and gaming options comes an increased risk of addiction and the associated social costs. This paper focuses on the effect of alcohol use on gambling-related problems. Variables correlated with both alcohol use and gambling may be difficult to observe, and the inability to include these items in empirical models may bias coefficient estimates. After addressing the endogeneity of alcohol use when appropriate, we find strong evidence that problematic gambling and alcohol consumption are complementary activities. PMID:18430523

  3. Science, education and industry information resources complementarity as a basis for design of knowledge management systems

    NASA Astrophysics Data System (ADS)

    Maksimov, N. V.; Tikhomirov, G. V.; Golitsyna, O. L.

    2017-01-01

    The main problems and circumstances that influence the creation of effective knowledge management systems are described. These problems include, in particular, the wide variety of instruments for knowledge representation and the lack of adequate lingware, including formal representations of semantic relationships. To develop semantic data descriptions, a conceptual model of the subject area and a conceptual-lexical system should be designed following the ISO 15926 standard. It is proposed to carry out an information integration of educational and production processes on the basis of information systems technologies. The integrated knowledge management system's information environment combines both traditional information resources and specific information resources of the subject domain, including task context and implicit/tacit knowledge.

  4. Phytoplankton Assemblage Characteristics in Recurrently Fluctuating Environments

    PubMed Central

    Roelke, Daniel L.; Spatharis, Sofie

    2015-01-01

    Annual variations in biogeochemical and physical processes can lead to nutrient variability and seasonal patterns in phytoplankton productivity and assemblage structure. In many coastal systems river inflow and water exchange with the ocean vary seasonally, and alternating periods can arise in which the nutrient most limiting to phytoplankton growth switches. Transitions between these alternating periods can be sudden or gradual, depending on human activities such as reservoir construction and interbasin water transfers. How such activities might influence phytoplankton assemblages is largely unknown. Here, we employed a multispecies, multi-nutrient model to explore how the mode of nutrient-loading switching might affect characteristics of phytoplankton assemblages. The model is based on the Monod relationship, which predicts an instantaneous growth rate from ambient inorganic nutrient concentrations, with the limiting nutrient at any given time determined by Liebig's Law of the Minimum. Our simulated phytoplankton assemblages self-organized from species-rich pools over a 15-year period, and only the surviving species were considered assemblage members. Using the model, we explored the interactive effects of the level of complementarity in trait trade-offs within phytoplankton assemblages and the amount of noise in the resource supply concentrations. We found that the effect of a shift from a sudden resource-supply transition to a gradual one, as observed in systems impacted by watershed development, was dependent on the level of complementarity. In the extremes, phytoplankton species richness and relative overyielding increased when complementarity was lowest, and phytoplankton biomass increased greatly when complementarity was highest. For low-complementarity simulations, the persistence of poorer-performing phytoplankton species of intermediate R*s led to higher richness and relative overyielding. For high-complementarity simulations, the formation of phytoplankton species clusters and niche compression enabled higher biomass accumulation. Our findings suggest that an understanding of the factors influencing the emergence of life history traits important to complementarity is necessary to predict the impact of watershed development on phytoplankton productivity and assemblage structure. PMID:25799563
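
    The growth rule described in this abstract (a Monod curve per nutrient, combined by Liebig's Law of the Minimum) can be sketched directly; the parameter values below are invented for illustration and are not taken from the study.

```python
# Monod growth per nutrient, with the realised rate set by the single
# most limiting nutrient (Liebig's Law of the Minimum).

def monod(mu_max, half_sat, conc):
    """Monod growth rate for one nutrient at ambient concentration conc."""
    return mu_max * conc / (half_sat + conc)

def liebig_growth(mu_max, half_sats, concs):
    """Realised growth rate: the minimum over per-nutrient Monod rates."""
    return min(monod(mu_max, k, s) for k, s in zip(half_sats, concs))

# Toy species: max growth 1.2/day; invented half-saturation constants and
# ambient concentrations for two nutrients (say N and P).
mu = liebig_growth(1.2, half_sats=[0.5, 0.05], concs=[2.0, 0.02])
# N term: 1.2*2.0/2.5 = 0.96; P term: 1.2*0.02/0.07 ~ 0.343, so P limits.
print(round(mu, 3))   # -> 0.343
```

    In the study's setting, each species would carry its own mu_max and half-saturation constants, and the trade-offs among those parameters are what define the level of complementarity between species.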

  5. Black hole complementarity with the generalized uncertainty principle in Gravity's Rainbow

    NASA Astrophysics Data System (ADS)

    Gim, Yongwan; Um, Hwajin; Kim, Wontae

    2018-02-01

    When gravitation is combined with quantum theory, the Heisenberg uncertainty principle could be extended to the generalized uncertainty principle accompanying a minimal length. To see how the generalized uncertainty principle works in the context of black hole complementarity, we calculate the required energy to duplicate information for the Schwarzschild black hole. It shows that the duplication of information is not allowed and black hole complementarity is still valid even assuming the generalized uncertainty principle. On the other hand, the generalized uncertainty principle with the minimal length could lead to a modification of the conventional dispersion relation in light of Gravity's Rainbow, where the minimal length is also invariant as well as the speed of light. Revisiting the gedanken experiment, we show that the no-cloning theorem for black hole complementarity can be made valid in the regime of Gravity's Rainbow on a certain combination of parameters.

  6. Tree species diversity promotes aboveground carbon storage through functional diversity and functional dominance.

    PubMed

    Mensah, Sylvanus; Veldtman, Ruan; Assogbadjo, Achille E; Glèlè Kakaï, Romain; Seifert, Thomas

    2016-10-01

    The relationship between biodiversity and ecosystem function has increasingly been debated as the cornerstone of the processes behind ecosystem services delivery. Experimental and natural field-based studies have produced inconsistent patterns of biodiversity-ecosystem function, supporting either the niche complementarity or the selection effects hypothesis. Here, we used aboveground carbon (AGC) storage as a proxy for ecosystem function in a South African mistbelt forest, and analyzed its relationship with species diversity through functional diversity and functional dominance. We hypothesized that (1) diversity influences AGC through functional diversity and functional dominance effects; and (2) the effects of diversity on AGC would be greater for functional dominance than for functional diversity. Community-weighted means (CWM) of functional traits (wood density, specific leaf area, and maximum plant height) were calculated to assess functional dominance (selection effects). As for functional diversity (complementarity effects), multitrait functional diversity indices were computed. The first hypothesis was tested using structural equation modeling. For the second hypothesis, the effects of environmental variables such as slope and altitude were tested first, and separate linear mixed-effects models were fitted afterward for functional diversity, functional dominance, and both. Results showed that AGC varied significantly along the slope gradient, with lower values at steeper sites. Species diversity (richness) had a positive relationship with AGC, even when slope effects were considered. As predicted, diversity effects on AGC were mediated through functional diversity and functional dominance, suggesting that neither niche complementarity nor selection effects alone account for carbon storage. However, the effects were greater for functional diversity than for functional dominance. Furthermore, functional dominance effects were strongly transmitted by the CWM of maximum plant height, reflecting the importance of forest vertical stratification for the diversity-carbon relationship. We therefore argue for stronger complementarity effects, induced also by the complementary light-use efficiency of tree species growing in the understory layer.

  7. Educational Finance Policy: A Search for Complementarities.

    ERIC Educational Resources Information Center

    Geske, Terry G.

    1983-01-01

    An overview of recent state level policy developments and policy analysis research as related to equity and efficiency objectives in public school finance is presented. Emphasis is placed on identifying complementarities, rather than the tradeoffs, between equity and efficiency criteria. (Author/LC)

  8. Is the firewall consistent? Gedanken experiments on black hole complementarity and firewall proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Dong-il; Lee, Bum-Hoon; Yeom, Dong-han, E-mail: dongil.j.hwang@gmail.com, E-mail: bhl@sogang.ac.kr, E-mail: innocent.yeom@gmail.com

    2013-01-01

    In this paper, we discuss the black hole complementarity and the firewall proposal at length. Black hole complementarity is inevitable if we assume the following five things: unitarity, the entropy-area formula, the existence of an information observer, semi-classical quantum field theory for an asymptotic observer, and general relativity for an in-falling observer. However, large N rescaling and the AMPS argument show that black hole complementarity is inconsistent. To salvage the basic philosophy of black hole complementarity, AMPS introduced a firewall around the horizon. According to large N rescaling, the firewall should be located close to the apparent horizon. We investigate the consistency of the firewall with two critical conditions: the firewall should be near the time-like apparent horizon, and it should not affect the future infinity. Concerning this, we have introduced a gravitational collapse with a false vacuum lump which can generate a spacetime structure with disconnected apparent horizons. This reveals a situation in which there is a firewall outside of the event horizon while the apparent horizon is absent. Therefore, the firewall, if it exists, not only modifies general relativity for an in-falling observer, but also modifies semi-classical quantum field theory for an asymptotic observer.

  9. Is the firewall consistent? Gedanken experiments on black hole complementarity and firewall proposal

    NASA Astrophysics Data System (ADS)

    Hwang, Dong-il; Lee, Bum-Hoon; Yeom, Dong-han

    2013-01-01

    In this paper, we discuss the black hole complementarity and the firewall proposal at length. Black hole complementarity is inevitable if we assume the following five things: unitarity, the entropy-area formula, the existence of an information observer, semi-classical quantum field theory for an asymptotic observer, and general relativity for an in-falling observer. However, large N rescaling and the AMPS argument show that black hole complementarity is inconsistent. To salvage the basic philosophy of black hole complementarity, AMPS introduced a firewall around the horizon. According to large N rescaling, the firewall should be located close to the apparent horizon. We investigate the consistency of the firewall with two critical conditions: the firewall should be near the time-like apparent horizon, and it should not affect the future infinity. Concerning this, we have introduced a gravitational collapse with a false vacuum lump which can generate a spacetime structure with disconnected apparent horizons. This reveals a situation in which there is a firewall outside of the event horizon while the apparent horizon is absent. Therefore, the firewall, if it exists, not only modifies general relativity for an in-falling observer, but also modifies semi-classical quantum field theory for an asymptotic observer.

  10. Intron self-complementarity enforces exon inclusion in a yeast pre-mRNA

    PubMed Central

    Howe, Kenneth James; Ares, Manuel

    1997-01-01

    Skipping of internal exons during removal of introns from pre-mRNA must be avoided for proper expression of most eukaryotic genes. Despite significant understanding of the mechanics of intron removal, mechanisms that ensure inclusion of internal exons in multi-intron pre-mRNAs remain mysterious. Using a natural two-intron yeast gene, we have identified distinct RNA–RNA complementarities within each intron that prevent exon skipping and ensure inclusion of internal exons. We show that these complementarities are positioned to act as intron identity elements, bringing together only the appropriate 5′ splice sites and branchpoints. Destroying either intron self-complementarity allows exon skipping to occur, and restoring the complementarity using compensatory mutations rescues exon inclusion, indicating that the elements act through formation of RNA secondary structure. Introducing new pairing potential between regions near the 5′ splice site of intron 1 and the branchpoint of intron 2 dramatically enhances exon skipping. Similar elements identified in single intron yeast genes contribute to splicing efficiency. Our results illustrate how intron secondary structure serves to coordinate splice site pairing and enforce exon inclusion. We suggest that similar elements in vertebrate genes could assist in the splicing of very large introns and in the evolution of alternative splicing. PMID:9356473

  11. Functional traits explain ecosystem function through opposing mechanisms.

    PubMed

    Cadotte, Marc W

    2017-08-01

    The ability to explain why multispecies assemblages produce greater biomass than monocultures has been a central goal in the quest to understand biodiversity effects on ecosystem function. Species contributions to ecosystem function can be driven by two processes: niche complementarity and a selection effect that is influenced by fitness (competitive) differences, and both can be approximated with measures of species' traits. It has been hypothesised that fitness differences are associated with a few singular traits, while complementarity requires multidimensional trait measures. Here, using experimental data from plant assemblages, I show that the selection effect was strongest when trait dissimilarity was low, while complementarity was greatest with high trait dissimilarity. Selection effects were best explained by a single trait, plant height. Complementarity was correlated with dissimilarity across multiple traits, representing above- and below-ground processes. By identifying the relevant traits linked to ecosystem function, we gain the ability to predict combinations of species that will maximise ecosystem function. © 2017 John Wiley & Sons Ltd/CNRS.
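
    Complementarity and selection effects of the kind discussed here are commonly quantified with the Loreau-Hector additive partition of the net biodiversity effect; the following is a minimal sketch of that standard partition with invented yields, not data from this study.

```python
# Loreau-Hector additive partition: the net biodiversity effect
# (observed mixture yield minus yield expected from monocultures) splits into
# a complementarity effect, N * mean(dRY) * mean(M), and a selection effect,
# N * cov(dRY, M), where dRY_i is the deviation of species i's relative yield
# from its planted proportion and M_i is its monoculture yield.

def partition(mono, mix, planted_frac):
    """Return (complementarity_effect, selection_effect, net_effect)."""
    n = len(mono)
    d_ry = [y / m - f for y, m, f in zip(mix, mono, planted_frac)]  # delta RY
    mean_m = sum(mono) / n
    mean_d = sum(d_ry) / n
    # population covariance between delta RY and monoculture yield
    cov = sum((d - mean_d) * (m - mean_m) for d, m in zip(d_ry, mono)) / n
    comp = n * mean_d * mean_m      # complementarity effect
    sel = n * cov                   # selection effect
    return comp, sel, comp + sel    # net effect = observed - expected yield

# Invented example: species 2 over-performs in the mixture, so part of the
# over-yielding is attributed to selection rather than complementarity.
comp, sel, net = partition(mono=[100.0, 200.0],
                           mix=[40.0, 140.0],
                           planted_frac=[0.5, 0.5])
print(comp, sel, net)   # about 15.0, 15.0, 30.0
```

    Note how the partition mirrors the abstract's dichotomy: a dominant high-yield species inflates the covariance term (selection), while uniform over-performance across species inflates the mean term (complementarity).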

  12. Parametrization of fermion mixing matrices in Kobayashi-Maskawa form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin Nan; Ma Boqiang; Center for High Energy Physics, Peking University, Beijing 100871

    2011-02-01

    Recent works show that the original Kobayashi-Maskawa (KM) form of fermion mixing matrix exhibits some advantages, especially when discussing problems such as unitarity boomerangs and maximal CP violation hypothesis. Therefore, the KM form of fermion mixing matrix is systematically studied in this paper. Starting with a general triminimal expansion of the KM matrix, we discuss the triminimal and Wolfenstein-like parametrizations with different basis matrices in detail. The quark-lepton complementarity relations play an important role in our discussions on describing quark mixing and lepton mixing in a unified way.

  13. Information-reality complementarity in photonic weak measurements

    NASA Astrophysics Data System (ADS)

    Mancino, Luca; Sbroscia, Marco; Roccia, Emanuele; Gianani, Ilaria; Cimini, Valeria; Paternostro, Mauro; Barbieri, Marco

    2018-06-01

    The emergence of realistic properties is a key problem in understanding the quantum-to-classical transition. In this respect, measurements represent a way to interface quantum systems with the macroscopic world: these can be driven in the weak regime, where a reduced back-action can be imparted by choosing meter states able to extract different amounts of information. Here we explore the implications of such weak measurement for the variation of realistic properties of two-level quantum systems pre- and postmeasurement, and extend our investigations to the case of open systems implementing the measurements.

  14. The promise of complementarity: Using the methods of foresight for health workforce planning.

    PubMed

    Rees, Gareth H; Crampton, Peter; Gauld, Robin; MacDonell, Stephen

    2018-05-01

    Health workforce planning aims to meet a health system's needs with a sustainable and fit-for-purpose workforce, although its efficacy is reduced under conditions of uncertainty. This PhD breakthrough article offers foresight as a means of addressing this uncertainty and models its complementarity in the context of the health workforce planning problem. The article summarises the findings of a two-case, multi-phase, mixed-method study that incorporates actor analysis, scenario development, and policy Delphi. This reveals a few dominant actors of considerable influence who are in conflict over a few critical workforce issues. Using these to augment normative scenarios developed from existing clinically developed model-of-care visions, a number of exploratory alternative descriptions of future workforce situations are produced for each case. Their analysis reveals that these scenarios are a reasonable facsimile of plausible futures, though some are favoured over others. Policy directions to support these favoured aspects can also be identified. This novel approach offers workforce planners and policy makers guidance on the use of complementary data, methods to overcome the limitations of conventional workforce forecasting, and a framework for exploring the complexities and ambiguities of a health workforce's evolution.

  15. A Physicist's Quest in Biology: Max Delbrück and "Complementarity".

    PubMed

    Strauss, Bernard S

    2017-06-01

    Max Delbrück was trained as a physicist but made his major contribution in biology and ultimately shared a Nobel Prize in Physiology or Medicine. He was the acknowledged leader of the founders of molecular biology, yet he failed to achieve his key scientific goals. His ultimate scientific aim was to find evidence for physical laws unique to biology: so-called "complementarity." He never did. The specific problem he initially wanted to solve was the nature of biological replication, but the discovery of the mechanism of replication was made by others, in large part because of his disdain for the details of biochemistry. His later career was spent investigating the effect of light on the fungus Phycomyces, a topic that turned out to be of limited general interest. He was known for his informality but also for his legendary displays of devastating criticism. His life and those of some of his closest colleagues were acted out against the background of a world in conflict. This essay describes the man and his career and searches for an explanation of his profound influence. Copyright © 2017 by the Genetics Society of America.

  16. Experimental investigation of halogen-bond hard-soft acid-base complementarity.

    PubMed

    Riel, Asia Marie S; Jessop, Morly J; Decato, Daniel A; Massena, Casey J; Nascimento, Vinicius R; Berryman, Orion B

    2017-04-01

    The halogen bond (XB) is a topical noncovalent interaction of rapidly increasing importance. The XB employs a 'soft' donor atom in comparison to the 'hard' proton of the hydrogen bond (HB). This difference has led to the hypothesis that XBs can form more favorable interactions with 'soft' bases than HBs. While computational studies have supported this suggestion, solution and solid-state data are lacking. Here, XB soft-soft complementarity is investigated with a bidentate receptor that shows similar associations with neutral carbonyls and heavy chalcogen analogs. The solution speciation and XB soft-soft complementarity are supported by four crystal structures containing neutral and anionic soft Lewis bases.

  17. Climate change mitigation and adaptation in the land use sector: from complementarity to synergy.

    PubMed

    Duguma, Lalisa A; Minang, Peter A; van Noordwijk, Meine

    2014-09-01

    Currently, mitigation and adaptation measures are handled separately, due to differences in priorities for the measures and segregated planning and implementation policies at international and national levels. There is a growing argument that synergistic approaches to adaptation and mitigation could bring substantial benefits at multiple scales in the land use sector. Nonetheless, efforts to implement synergies between adaptation and mitigation measures are rare, due to the weak conceptual framing of the approach and constraining policy issues. In this paper, we explore the attributes of synergy and the necessary enabling conditions and discuss, as an example, experience with the Ngitili system in Tanzania, which serves both adaptation and mitigation functions. An in-depth look into current practices suggests that more emphasis is laid on complementarity (i.e., mitigation projects providing adaptation co-benefits and vice versa) than on synergy. Unlike complementarity, synergy should emphasize functionally sustainable landscape systems in which adaptation and mitigation are optimized as part of multiple functions. We argue that the current practice of seeking co-benefits (complementarity) is a necessary but insufficient step toward addressing synergy. Moving forward from complementarity will require a paradigm shift from the current compartmentalization of mitigation and adaptation to systems thinking at the landscape scale. However, enabling policy, institutional, and investment conditions need to be developed at global, national, and local levels to achieve synergistic goals.

  18. The eukaryotic cell originated in the integration and redistribution of hyperstructures from communities of prokaryotic cells based on molecular complementarity.

    PubMed

    Norris, Vic; Root-Bernstein, Robert

    2009-06-04

    In the "ecosystems-first" approach to the origins of life, networks of non-covalent assemblies of molecules (composomes), rather than individual protocells, evolved under the constraints of molecular complementarity. Composomes evolved into the hyperstructures of modern bacteria. We extend the ecosystems-first approach to explain the origin of eukaryotic cells through the integration of mixed populations of bacteria. We suggest that mutualism and symbiosis resulted in cellular mergers entailing the loss of redundant hyperstructures, the uncoupling of transcription and translation, and the emergence of introns and multiple chromosomes. Molecular complementarity also facilitated integration of bacterial hyperstructures to perform cytoskeletal and movement functions.

  19. Computational studies of new potential antimalarial compounds: Stereoelectronic complementarity with the receptor

    NASA Astrophysics Data System (ADS)

    Portela, César; Afonso, Carlos M. M.; Pinto, Madalena M. M.; João Ramos, Maria

    2003-09-01

    One of the most important pharmacological mechanisms of antimalarial action is the inhibition of the aggregation of hematin into hemozoin. We present a group of new potential antimalarial molecules for which we have performed a DFT study of their stereoelectronic properties. Additionally, the same calculations were carried out for the two putative drug receptors involved in this activity, i.e., the hematin μ-oxo dimer and hemozoin. A complementarity between the structural and electronic profiles of the designed molecules and the receptors can be observed. A docking study of the new compounds in relation to the two putative receptors is also presented, providing a correlation with the defined electrostatic complementarity.

  20. Compatibility and Complementarity of Classroom Ecology and Didactique Research Perspectives in Physical Education

    ERIC Educational Resources Information Center

    Leriche, Jérôme; Desbiens, Jean-François; Amade-Escot, Chantal; Tinning, Richard

    2016-01-01

    A large diversity of theoretical frameworks exists in the physical education literature. This article focuses on two of those frameworks to examine their compatibility and their complementarity. The classroom ecology paradigm concentrates on the balance between three task systems, two vectors, and programs of actions proposed by the physical…

  1. Evaluating complementary networks of restoration plantings for landscape-scale occurrence of temporally dynamic species.

    PubMed

    Ikin, Karen; Tulloch, Ayesha; Gibbons, Philip; Ansell, Dean; Seddon, Julian; Lindenmayer, David

    2016-10-01

    Multibillion dollar investments in land restoration make it critical that conservation goals are achieved cost-effectively. Approaches developed for systematic conservation planning offer opportunities to evaluate landscape-scale, temporally dynamic biodiversity outcomes from restoration and improve on traditional approaches that focus on the most species-rich plantings. We investigated whether it is possible to apply a complementarity-based approach to evaluate the extent to which an existing network of restoration plantings meets representation targets. Using a case study of woodland birds of conservation concern in southeastern Australia, we compared complementarity-based selections of plantings based on temporally dynamic species occurrences with selections based on static species occurrences and selections based on ranking plantings by species richness. The dynamic complementarity approach, which incorporated species occurrences over 5 years, resulted in higher species occurrences and proportion of targets met compared with the static complementarity approach, in which species occurrences were taken at a single point in time. For equivalent cost, the dynamic complementarity approach also always resulted in higher average minimum percent occurrence of species maintained through time and a higher proportion of the bird community meeting representation targets compared with the species-richness approach. Plantings selected under the complementarity approaches represented the full range of planting attributes, whereas those selected under the species-richness approach were larger in size. Our results suggest that future restoration policy should not attempt to achieve all conservation goals within individual plantings, but should instead capitalize on restoration opportunities as they arise to achieve collective value of multiple plantings across the landscape. 
Networks of restoration plantings with complementary attributes of age, size, vegetation structure, and landscape context lead to considerably better outcomes than conventional restoration objectives of site-scale species richness and are crucial for allocating restoration investment wisely to reach desired conservation goals. © 2016 Society for Conservation Biology.
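The complementarity-based selection described above is, at heart, a target-driven set-cover heuristic: repeatedly add the planting that covers the most still-unmet representation targets per unit cost. A minimal sketch with hypothetical sites, species, and costs (none of these names or numbers come from the study):

```python
def select_plantings(occurrences, costs, targets):
    """Greedily choose sites until every species meets its representation
    target. occurrences[site] = set of species present at that site;
    targets[sp] = number of selected sites that must contain species sp."""
    unmet = dict(targets)
    chosen, remaining = [], set(occurrences)
    while any(v > 0 for v in unmet.values()) and remaining:
        def gain(site):  # species this site would newly help represent
            return sum(1 for sp in occurrences[site] if unmet.get(sp, 0) > 0)
        site = max(remaining, key=lambda s: gain(s) / costs[s])
        if gain(site) == 0:
            break  # no remaining site advances any unmet target
        chosen.append(site)
        remaining.discard(site)
        for sp in occurrences[site]:
            if unmet.get(sp, 0) > 0:
                unmet[sp] -= 1
    return chosen

# Hypothetical plantings: C covers two targets cheaply, B then adds the wren
sites = {"A": {"robin", "wren"}, "B": {"wren"}, "C": {"robin", "finch"}}
cost = {"A": 3.0, "B": 1.0, "C": 1.5}
picked = select_plantings(sites, cost, {"robin": 1, "wren": 1, "finch": 1})
```

Note that a species-richness ranking would score A and C equally here; the complementarity heuristic instead values B once the wren is the only unmet target.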

  2. Numerically pricing American options under the generalized mixed fractional Brownian motion model

    NASA Astrophysics Data System (ADS)

    Chen, Wenting; Yan, Bowen; Lian, Guanghua; Zhang, Ying

    2016-06-01

    In this paper, we introduce a robust numerical method, based on the upwind scheme, for the pricing of American puts under the generalized mixed fractional Brownian motion (GMFBM) model. By using portfolio analysis and applying the Wick-Itô formula, a partial differential equation (PDE) governing the prices of vanilla options under the GMFBM is derived for the first time. Based on this, we formulate the pricing of American puts under the current model as a linear complementarity problem (LCP). Unlike the classical Black-Scholes (B-S) model or the generalized B-S model discussed in Cen and Le (2011), the newly obtained LCP under the GMFBM model is difficult to solve accurately because of numerical instability resulting from the degeneration of the governing PDE as time approaches zero. To overcome this difficulty, a numerical approach based on the upwind scheme is adopted. It is shown that the coefficient matrix of the current method is an M-matrix, which ensures its stability in the maximum-norm sense. Remarkably, we also provide a sharp theoretical error estimate for the current method, which is further verified numerically. The results of various numerical experiments suggest that this new approach is quite accurate, and can be easily extended to price other types of financial derivatives with an American-style exercise feature under the GMFBM model.
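The LCP formulation used here is the standard one for American options: find x >= 0 with Mx + q >= 0 and x·(Mx + q) = 0, where M comes from the discretized PDE operator. As a hedged illustration (projected SOR rather than the paper's scheme), the sketch below solves a toy LCP with a tridiagonal M-matrix of the kind a one-dimensional finite-difference discretization produces; the matrix and vector are made up for the example.

```python
import numpy as np

def psor_lcp(M, q, omega=1.2, tol=1e-10, max_iter=10000):
    """Projected SOR for the LCP: find x >= 0 with Mx + q >= 0 and
    x.(Mx + q) = 0. Converges for symmetric positive-definite M and
    0 < omega < 2."""
    x = np.zeros(len(q))
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(q)):
            r = q[i] + M[i] @ x                 # residual with latest values
            x[i] = max(0.0, x[i] - omega * r / M[i, i])
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Toy tridiagonal M-matrix, like a 1-D finite-difference operator
M = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
q = np.array([-1.0, 2.0, -1.0])
x = psor_lcp(M, q)
w = M @ x + q    # complementary slackness: x >= 0, w >= 0, x.w = 0
```

Here the exact solution is x = (0.5, 0, 0.5): the middle component is "inactive" (w > 0), which is exactly the early-exercise/continuation split that the American-put LCP encodes.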

  3. Diagnostic Interference: People’s Use of Information in Incomplete Bayesian World Problems

    DTIC Science & Technology

    1990-07-01

    them was the one who did it. Stephen and Paul are 5 year old twins. One afternoon their mother hired a new babysitter so she could go out to do errands...Paul broke the lamp? _ Reliability. Stephen’s and Paul’s mother enjoys dressing them alike. Before she left, she had said to the babysitter " New ...complementarity); and it one already has a degree of belief p(H) in proposition H, and one is given new evidence E pertinent to the truth of H, one can use a

  4. The Impact of Electronic Commerce on the Publishing Industry: Towards a Business Value Complementarity Framework of Electronic Publishing.

    ERIC Educational Resources Information Center

    Scupola, Ada

    1999-01-01

    Discussion of the publishing industry and its use of information and communication technologies focuses on the way in which electronic-commerce technologies are changing and could change the publishing processes, and develops a business complementarity model of electronic publishing to maximize profitability and improve the competitive position.…

  5. Complementarity Constraints on Component-Based Multiphase Flow Problems: Should They Be Implemented Locally or Globally?

    NASA Astrophysics Data System (ADS)

    Shao, H.; Huang, Y.; Kolditz, O.

    2015-12-01

    Multiphase flow problems are numerically difficult to solve, as they often contain nonlinear phase-transition phenomena. A conventional technique is to introduce complementarity constraints under which fluid properties such as liquid saturations are confined to a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities, which can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing the complementarity constraints, based on the persistent primary variables formulation [4], are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e., it couples the constraints with the local constitutive equations. The second approach [2,3], namely the "global complementarity constraints", applies the constraints globally together with the mass conservation equation. We discuss how these two approaches are applied to solve a non-isothermal componential multiphase flow problem with phase-change phenomena. Several benchmarks are presented to investigate the overall numerical performance of the different approaches, and the advantages and disadvantages of the different models are summarized. References: [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model. Computational Geosciences 17(2):431-442, 2013. [2] A. Lauser, C. Hager, R. Helmig and B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Adv. Water Resour. 34:957-966, 2011. [3] J. Jaffré and A. Sboui. Henry's law and gas phase disappearance. Transp. Porous Media 82:521-526, 2010. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in porous media: application to gas migration in a nuclear waste repository. Computational Geosciences 13(1):29-42, 2009.
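A common way to couple complementarity constraints with discretized equations, in the spirit of the approaches above, is to rewrite each condition a >= 0, b >= 0, ab = 0 with an NCP function such as the Fischer-Burmeister function fb(a, b) = a + b - sqrt(a^2 + b^2), and then apply a semismooth Newton method. The sketch below is illustrative only (a small linear complementarity system, not the authors' multiphase implementation):

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister NCP function: fb(a,b) = 0 iff a >= 0, b >= 0, a*b = 0."""
    return a + b - np.sqrt(a * a + b * b)

def semismooth_newton_lcp(M, q, tol=1e-10, max_iter=50):
    """Solve x >= 0, Mx + q >= 0, x.(Mx + q) = 0 via Newton on Phi(x) = fb(x, Mx + q)."""
    x = np.ones(len(q))
    for _ in range(max_iter):
        F = M @ x + q
        Phi = fb(x, F)
        if np.linalg.norm(Phi, np.inf) < tol:
            break
        r = np.sqrt(x**2 + F**2)
        r[r == 0.0] = 1.0                    # any subgradient works at the kink
        da = 1.0 - x / r                     # d(fb)/da
        db = 1.0 - F / r                     # d(fb)/db
        J = np.diag(da) + np.diag(db) @ M    # element of the generalized Jacobian
        x = x - np.linalg.solve(J, Phi)
    return x

# Toy complementarity system with a tridiagonal M-matrix
M = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
q = np.array([-1.0, 1.0, -1.0])
x = semismooth_newton_lcp(M, q)
w = M @ x + q    # complementary slack variable
```

The appeal of this reformulation is that the inequality constraints disappear into a single nonsmooth root-finding problem, which is how "local" and "global" variants can be assembled into the overall nonlinear system.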

  6. Levels of reconstruction as complementarity in mixed methods research: a social theory-based conceptual framework for integrating qualitative and quantitative research.

    PubMed

    Carroll, Linda J; Rothe, J Peter

    2010-09-01

    As in other areas of health research, there has been increasing use of qualitative methods to study public health problems such as injuries and injury prevention. Likewise, the integration of qualitative and quantitative research (mixed methods) is beginning to assume a more prominent role in public health studies. Using mixed methods has great potential for gaining a broad and comprehensive understanding of injuries and their prevention. However, qualitative and quantitative research methods are based on two inherently different paradigms, and their integration requires a conceptual framework that permits the unity of these two methods. We present a theory-driven framework for viewing qualitative and quantitative research, which enables us to integrate them in a conceptually sound and useful manner. This framework has its foundation within the philosophical concept of complementarity, as espoused in the physical and social sciences, and draws on Bergson's metaphysical work on the 'ways of knowing'. Through understanding how data are constructed and reconstructed, and the different levels of meaning that can be ascribed to qualitative and quantitative findings, we can use a mixed-methods approach to gain a conceptually sound, holistic knowledge about injury phenomena that will enhance our development of relevant and successful interventions.

  7. ClusPro: an automated docking and discrimination method for the prediction of protein complexes.

    PubMed

    Comeau, Stephen R; Gatchell, David W; Vajda, Sandor; Camacho, Carlos J

    2004-01-01

    Predicting protein interactions is one of the most challenging problems in functional genomics. Given two proteins known to interact, current docking methods evaluate billions of docked conformations using simple scoring functions, and in addition to near-native structures they yield many false positives, i.e. structures with good surface complementarity but far from the native. We have developed a fast algorithm for filtering docked conformations with good surface complementarity, and ranking them based on their clustering properties. The free-energy filters select complexes with the lowest desolvation and electrostatic energies. Clustering is then used to smooth the local minima and to select the ones with the broadest energy wells, a property associated with the free energy at the binding site. The robustness of the method was tested on sets of 2000 docked conformations generated for 48 pairs of interacting proteins. In 31 of these cases, the top 10 predictions include at least one near-native complex, with an average RMSD of 5 Å from the native structure. The docking and discrimination method also provides good results for a number of complexes that were used as targets in the Critical Assessment of PRedictions of Interactions experiment. The fully automated docking and discrimination server ClusPro can be found at http://structure.bu.edu
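The kind of greedy, neighbor-count clustering this discrimination step relies on can be sketched as follows. This is a simplified stand-in: real inputs are pairwise RMSDs between docked conformations and a radius of several ångströms, whereas here a one-dimensional toy distance matrix and an arbitrary radius are used.

```python
import numpy as np

def greedy_cluster(rmsd, radius):
    """Repeatedly pick the structure with the most neighbors within `radius`
    as a cluster center, record the cluster, and remove it from the pool.
    `rmsd` is a symmetric pairwise distance matrix."""
    remaining = set(range(len(rmsd)))
    clusters = []
    while remaining:
        center = max(remaining,
                     key=lambda i: sum(rmsd[i, j] <= radius for j in remaining))
        members = {j for j in remaining if rmsd[center, j] <= radius}
        clusters.append((center, members))
        remaining -= members
    return clusters

# Toy "conformations" on a line; rmsd[i, j] is just their distance
pts = np.array([0.0, 1.0, 2.0, 20.0, 21.0, 50.0])
rmsd = np.abs(pts[:, None] - pts[None, :])
clusters = greedy_cluster(rmsd, radius=5.0)
# clusters come out largest-first: {0,1,2}, then {3,4}, then {5}
```

Because the cluster extracted at each step is exactly the current maximum neighbor count, cluster sizes are nonincreasing, so the broadest (most populated) well is ranked first.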

  8. Complementarity among four highly productive grassland species depends on resource availability.

    PubMed

    Roscher, Christiane; Schmid, Bernhard; Kolle, Olaf; Schulze, Ernst-Detlef

    2016-06-01

    Positive species richness-productivity relationships are common in biodiversity experiments, but how resource availability modifies biodiversity effects in grass-legume mixtures composed of highly productive species is yet to be explicitly tested. We addressed this question by choosing two grasses (Arrhenatherum elatius and Dactylis glomerata) and two legumes (Medicago × varia and Onobrychis viciifolia) which are highly productive in monocultures and dominant in mixtures (the Jena Experiment). We established monocultures, all possible two- and three-species mixtures, and the four-species mixture under three different resource supply conditions (control, fertilization, and shading). Compared to the control, community biomass production decreased under shading (-56 %) and increased under fertilization (+12 %). Net diversity effects (i.e., mixture minus mean monoculture biomass) were positive in the control and under shading (on average +15 and +72 %, respectively) and negative under fertilization (-10 %). Positive complementarity effects in the control suggested resource partitioning and facilitation of growth through symbiotic N2 fixation by legumes. Positive complementarity effects under shading indicated that resource partitioning is also possible when growth is carbon-limited. Negative complementarity effects under fertilization suggested that external nutrient supply depressed facilitative grass-legume interactions due to increased competition for light. Selection effects, which quantify the dominance of species with particularly high monoculture biomasses in the mixture, were generally small compared to complementarity effects, and indicated that these species had comparable competitive strengths in the mixture. Our study shows that resource availability has a strong impact on the occurrence of positive diversity effects among tall and highly productive grass and legume species.
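Net, complementarity, and selection effects of the kind reported above are conventionally computed with the additive partition of Loreau and Hector (2001). A minimal sketch, with made-up biomass values rather than data from the Jena Experiment:

```python
import numpy as np

def partition_diversity_effect(mono, mix):
    """Additive partition: net effect = complementarity effect + selection effect.
    mono[i]: monoculture biomass of species i; mix[i]: its biomass in an
    N-species mixture sown in equal proportions (expected share 1/N)."""
    mono = np.asarray(mono, float)
    mix = np.asarray(mix, float)
    n = len(mono)
    d_ry = mix / mono - 1.0 / n            # deviation from expected relative yield
    net = d_ry @ mono                      # observed minus expected mixture biomass
    comp = n * d_ry.mean() * mono.mean()   # complementarity effect
    sel = n * np.mean((d_ry - d_ry.mean()) * (mono - mono.mean()))  # selection effect
    return net, comp, sel

# Hypothetical two-species mixture in which both species overyield
net, comp, sel = partition_diversity_effect(mono=[600.0, 200.0],
                                            mix=[400.0, 150.0])
# net effect of 150 splits into positive complementarity and negative selection
```

A positive complementarity term with a negative selection term, as in this toy example, is the signature of mixtures whose overyielding is driven by resource partitioning rather than by dominance of the most productive monoculture species.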

  9. On the Embedded Complementarity of Agent-Based and Aggregate Reasoning in Students' Developing Understanding of Dynamic Systems

    ERIC Educational Resources Information Center

    Stroup, Walter M.; Wilensky, Uri

    2014-01-01

    Placed in the larger context of broadening the engagement with systems dynamics and complexity theory in school-aged learning and teaching, this paper is intended to introduce, situate, and illustrate--with results from the use of network supported participatory simulations in classrooms--a stance we call "embedded complementarity" as an…

  10. Has Complementarity between Employer-Sponsored Training and Education in the U.S. Changed during the 2000s?

    ERIC Educational Resources Information Center

    Waddoups, C. Jeffrey

    2018-01-01

    The study reveals that the positive correlation between formal education and job training (complementarity) has weakened during the 2000s. Using U.S. Census Bureau data from the Survey of Income and Program Participation, the study finds that although workers in all categories of educational attainment felt the decline, the effects were strongest…

  11. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently; each has advantages and disadvantages compared with the others. One notion is that the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector based on feature similarity is proposed to define the degree of complementarity and synergy. This index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF). This avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods, and that the proposed method retains the advantages of the individual fusion algorithms.
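The role of the initial weights can be illustrated with plain multiplicative-update NMF (Lee and Seung) seeded from fixed factors. This is a generic sketch with synthetic data; the paper's feature-derived weights would simply replace the deterministic initial factors used here.

```python
import numpy as np

def nmf_multiplicative(X, W, H, n_iter=300):
    """Lee-Seung multiplicative updates started from caller-supplied factors,
    so the (normally random) initialization is fully controlled."""
    eps = 1e-12                      # guards against division by zero
    W, H = W.copy(), H.copy()
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic nonnegative data with exact rank 3
rng = np.random.default_rng(42)
X = rng.uniform(0.5, 2.0, size=(10, 3)) @ rng.uniform(0.5, 2.0, size=(3, 8))

# Deterministic, strictly positive initial factors standing in for
# feature-derived weights
W0 = np.linspace(0.1, 1.0, 30).reshape(10, 3)
H0 = np.linspace(0.1, 1.0, 24).reshape(3, 8)

err_init = np.linalg.norm(X - W0 @ H0)
W, H = nmf_multiplicative(X, W0, H0)
err_final = np.linalg.norm(X - W @ H)
```

Multiplicative updates never increase the reconstruction error and preserve nonnegativity, which is why seeding them with informative weights (instead of a random start) makes the factorization reproducible.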

  12. Leveraging genome-wide datasets to quantify the functional role of the anti-Shine-Dalgarno sequence in regulating translation efficiency.

    PubMed

    Hockenberry, Adam J; Pah, Adam R; Jewett, Michael C; Amaral, Luís A N

    2017-01-01

    Studies dating back to the 1970s established that sequence complementarity between the anti-Shine-Dalgarno (aSD) sequence on prokaryotic ribosomes and the 5' untranslated region of mRNAs helps to facilitate translation initiation. The optimal location of aSD sequence binding relative to the start codon, the full extent of the aSD sequence, and the functional form of the relationship between aSD sequence complementarity and translation efficiency have not been fully resolved. Here, we investigate these relationships by leveraging the sequence diversity of endogenous genes and recently available genome-wide estimates of translation efficiency. We show that, after accounting for predicted mRNA structure, aSD sequence complementarity increases the translation of endogenous mRNAs by roughly 50%. Further, we observe that this relationship is nonlinear, with translation efficiency maximized for mRNAs with intermediate levels of aSD sequence complementarity. The mechanistic insights that we observe are highly robust: we find nearly identical results in multiple datasets spanning three distantly related bacteria. Further, we verify our main conclusions by re-analysing a controlled experimental dataset. © 2017 The Authors.
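A crude way to see what "aSD sequence complementarity" measures is to slide the reverse complement of an aSD fragment along a 5' UTR and count Watson-Crick matches. The sketch below uses CCUCCU as a stand-in aSD fragment and a made-up UTR; the study itself works with predicted hybridization energies and genome-wide expression estimates, so this is illustrative only (it also ignores G:U wobble pairing).

```python
RC = {"A": "U", "U": "A", "G": "C", "C": "G"}

def revcomp(rna):
    """Reverse complement of an RNA string (Watson-Crick pairs only)."""
    return "".join(RC[b] for b in reversed(rna))

def best_asd_window(utr, asd="CCUCCU"):
    """Slide the reverse complement of the aSD along the UTR and return
    (best match count, 0-based offset of the best-matching window)."""
    target = revcomp(asd)                  # CCUCCU -> AGGAGG, the SD consensus
    k = len(target)
    scores = [
        (sum(a == b for a, b in zip(utr[i:i + k], target)), i)
        for i in range(len(utr) - k + 1)
    ]
    return max(scores)

# Made-up UTR containing a canonical Shine-Dalgarno motif before the AUG
score, pos = best_asd_window("AAUAAGGAGGUAACAUG")
```

A real analysis would score windows by free energy of hybridization and relate both the score and the window offset (spacing to the start codon) to measured translation efficiency.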

  13. Plant diversity effects on grassland productivity are robust to both nutrient enrichment and drought

    PubMed Central

    Isbell, Forest; Manning, Pete; Connolly, John; Bruelheide, Helge; Ebeling, Anne; Roscher, Christiane; van Ruijven, Jasper; Weigelt, Alexandra; Wilsey, Brian; Beierkuhnlein, Carl; de Luca, Enrica; Griffin, John N.; Hautier, Yann; Hector, Andy; Jentsch, Anke; Kreyling, Jürgen; Lanta, Vojtech; Loreau, Michel; Meyer, Sebastian T.; Mori, Akira S.; Naeem, Shahid; Palmborg, Cecilia; Polley, H. Wayne; Reich, Peter B.; Schmid, Bernhard; Siebenkäs, Alrun; Seabloom, Eric; Thakur, Madhav P.; Tilman, David; Vogel, Anja; Eisenhauer, Nico

    2016-01-01

    Global change drivers are rapidly altering resource availability and biodiversity. While there is consensus that greater biodiversity increases the functioning of ecosystems, the extent to which biodiversity buffers ecosystem productivity in response to changes in resource availability remains unclear. We use data from 16 grassland experiments across North America and Europe that manipulated plant species richness and one of two essential resources—soil nutrients or water—to assess the direction and strength of the interaction between plant diversity and resource alteration on above-ground productivity and net biodiversity, complementarity, and selection effects. Despite strong increases in productivity with nutrient addition and decreases in productivity with drought, we found that resource alterations did not alter biodiversity–ecosystem functioning relationships. Our results suggest that these relationships are largely determined by increases in complementarity effects along plant species richness gradients. Although nutrient addition reduced complementarity effects at high diversity, this appears to be due to high biomass in monocultures under nutrient enrichment. Our results indicate that diversity and the complementarity of species are important regulators of grassland ecosystem productivity, regardless of changes in other drivers of ecosystem function. PMID:27114579

  14. By-product mutualism with evolving common enemies.

    PubMed

    De Jaegher, Kris

    2017-05-07

    The common-enemy hypothesis of by-product mutualism states that organisms cooperate when it is in their individual interests to do so, with benefits for other organisms arising as a by-product; in particular, such cooperation is hypothesized to arise when organisms face the common enemy of a sufficiently adverse environment. In an evolutionary game where two defenders can cooperate to defend a common resource, this paper analyzes the common-enemy hypothesis when adversity is endogenous, in that an attacker sets the number of attacks. As a benchmark, we first consider exogenous adversity, where adversity is not subject to evolution. In this case, the common-enemy hypothesis is predicted when the degree of complementarity between defenders' defensive efforts is sufficiently low. When the degree of complementarity is high, the hypothesis is predicted only when cooperation costs are high; when cooperation costs are instead low, a competing hypothesis is predicted, where adversity discourages cooperation. Second, we consider the case of endogenous adversity. In this case, we continue to predict the competing hypothesis for a high degree of complementarity and low cooperation costs. The common-enemy hypothesis, however, only continues to be predicted for the lowest degrees of complementarity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. A Metasynthesis of the Complementarity of Culturally Responsive and Inquiry-Based Science Education in K-12 Settings: Implications for Advancing Equitable Science Teaching and Learning

    ERIC Educational Resources Information Center

    Brown, Julie C.

    2017-01-01

    Employing metasynthesis as a method, this study examined 52 empirical articles on culturally relevant and responsive science education in K-12 settings to determine the nature and scope of complementarity between culturally responsive and inquiry-based science practices (i.e., science and engineering practices identified in the National Research…

  16. Individual-based analyses reveal limited functional overlap in a coral reef fish community.

    PubMed

    Brandl, Simon J; Bellwood, David R

    2014-05-01

    Detailed knowledge of a species' functional niche is crucial for the study of ecological communities and processes. The extent of niche overlap, functional redundancy and functional complementarity is of particular importance if we are to understand ecosystem processes and their vulnerability to disturbances. Coral reefs are among the most threatened marine systems, and anthropogenic activity is changing the functional composition of reefs. The loss of herbivorous fishes is particularly concerning as the removal of algae is crucial for the growth and survival of corals. Yet, the foraging patterns of the various herbivorous fish species are poorly understood. Using a multidimensional framework, we present novel individual-based analyses of species' realized functional niches, which we apply to a herbivorous coral reef fish community. In calculating niche volumes for 21 species, based on their microhabitat utilization patterns during foraging, and computing functional overlaps, we provide a measurement of functional redundancy or complementarity. Complementarity is the inverse of redundancy and is defined as less than 50% overlap in niche volumes. The analyses reveal extensive complementarity with an average functional overlap of just 15.2%. Furthermore, the analyses divide herbivorous reef fishes into two broad groups. The first group (predominantly surgeonfishes and parrotfishes) comprises species feeding on exposed surfaces and predominantly open reef matrix or sandy substrata, resulting in small niche volumes and extensive complementarity. In contrast, the second group consists of species (predominantly rabbitfishes) that feed over a wider range of microhabitats, penetrating the reef matrix to exploit concealed surfaces of various substratum types. These species show high variation among individuals, leading to large niche volumes, more overlap and less complementarity. 
These results may have crucial consequences for our understanding of herbivorous processes on coral reefs, as algal removal appears to depend strongly on species-specific microhabitat utilization patterns of herbivores. Furthermore, the results emphasize the capacity of the individual-based analyses to reveal variation in the functional niches of species, even in high-diversity systems such as coral reefs, demonstrating its potential applicability to other high-diversity ecosystems. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.

  17. Horizons of description: Black holes and complementarity

    NASA Astrophysics Data System (ADS)

    Bokulich, Peter Joshua Martin

    Niels Bohr famously argued that a consistent understanding of quantum mechanics requires a new epistemic framework, which he named complementarity. This position asserts that even in the context of quantum theory, classical concepts must be used to understand and communicate measurement results. The apparent conflict between certain classical descriptions is avoided by recognizing that their application now crucially depends on the measurement context. Recently it has been argued that a new form of complementarity can provide a solution to the so-called information loss paradox. Stephen Hawking argues that the evolution of black holes cannot be described by standard unitary quantum evolution, because such evolution always preserves information, while the evaporation of a black hole will imply that any information that fell into it is irrevocably lost, hence a "paradox." Some researchers in quantum gravity have argued that this paradox can be resolved if one interprets certain seemingly incompatible descriptions of events around black holes as instead being complementary. In this dissertation I assess the extent to which this black hole complementarity can be undergirded by Bohr's account of the limitations of classical concepts. I begin by offering an interpretation of Bohr's complementarity and the role that it plays in his philosophy of quantum theory. After clarifying the nature of classical concepts, I offer an account of the limitations these concepts face, and argue that Bohr's appeal to disturbance is best understood as referring to these conceptual limits. Following preparatory chapters on issues in quantum field theory and black hole mechanics, I offer an analysis of the information loss paradox and various responses to it. I consider the three most prominent accounts of black hole complementarity and argue that they fail to offer sufficient justification for the proposed incompatibility between descriptions.
The lesson that emerges from this dissertation is that we have as much to learn from the limitations facing our scientific descriptions as we do from the successes they enjoy. Because all of our scientific theories offer at best limited, effective accounts of the world, an important part of our interpretive efforts will be assessing the borders of these domains of description.

  18. Invasive carnivores alter ecological function and enhance complementarity in scavenger assemblages on ocean beaches.

    PubMed

    Brown, Marion B; Schlacher, Thomas A; Schoeman, David S; Weston, Michael A; Huijbers, Chantal M; Olds, Andrew D; Connolly, Rod M

    2015-10-01

    Species composition is expected to alter ecological function in assemblages if species traits differ strongly. Such effects are often large and persistent for nonnative carnivores invading islands. Alternatively, high similarity in traits within assemblages creates a degree of functional redundancy in ecosystems. Here we tested whether species turnover results in functional ecological equivalence or complementarity, and whether invasive carnivores on islands significantly alter such ecological function. The model system consisted of vertebrate scavengers (dominated by raptors) foraging on animal carcasses on ocean beaches on two Australian islands, one with and one without invasive red foxes (Vulpes vulpes). Partitioning of scavenging events among species, carcass removal rates, and detection speeds were quantified using camera traps baited with fish carcasses at the dune-beach interface. Complete segregation of temporal foraging niches between mammals (nocturnal) and birds (diurnal) reflects complementarity in carrion utilization. Conversely, functional redundancy exists within the bird guild where several species of raptors dominate carrion removal in a broadly similar way. As predicted, effects of red foxes were large. They substantially changed the nature and rate of the scavenging process in the system: (1) foxes consumed over half (55%) of all carrion available at night, compared with negligible mammalian foraging at night on the fox-free island, and (2) significant shifts in the composition of the scavenger assemblages consuming beach-cast carrion are the consequence of fox invasion at one island. Arguably, in the absence of other mammalian apex predators, the addition of red foxes creates a new dimension of functional complementarity in beach food webs. 
However, this functional complementarity added by foxes is neither benign nor neutral, as marine carrion subsidies to coastal red fox populations are likely to facilitate their persistence as exotic carnivores.

  19. Testing Electrostatic Complementarity in Enzyme Catalysis: Hydrogen Bonding in the Ketosteroid Isomerase Oxyanion Hole

    PubMed Central

    Kraut, Daniel A; Sigala, Paul A; Pybus, Brandon; Liu, Corey W; Ringe, Dagmar; Petsko, Gregory A

    2006-01-01

    A longstanding proposal in enzymology is that enzymes are electrostatically and geometrically complementary to the transition states of the reactions they catalyze and that this complementarity contributes to catalysis. Experimental evaluation of this contribution, however, has been difficult. We have systematically dissected the potential contribution to catalysis from electrostatic complementarity in ketosteroid isomerase. Phenolates, analogs of the transition state and reaction intermediate, bind and accept two hydrogen bonds in an active site oxyanion hole. The binding of substituted phenolates of constant molecular shape but increasing pKa models the charge accumulation in the oxyanion hole during the enzymatic reaction. As charge localization increases, the NMR chemical shifts of protons involved in oxyanion hole hydrogen bonds increase by 0.50–0.76 ppm/pKa unit, suggesting a bond shortening of ~0.02 Å/pKa unit. Nevertheless, there is little change in binding affinity across a series of substituted phenolates (ΔΔG = −0.2 kcal/mol/pKa unit). The small effect of increased charge localization on affinity occurs despite the shortening of the hydrogen bonds and a large favorable change in binding enthalpy (ΔΔH = −2.0 kcal/mol/pKa unit). This shallow dependence of binding affinity suggests that electrostatic complementarity in the oxyanion hole makes at most a modest contribution to catalysis of ~300-fold. We propose that geometrical complementarity between the oxyanion hole hydrogen-bond donors and the transition state oxyanion provides a significant catalytic contribution, and suggest that KSI, like other enzymes, achieves its catalytic prowess through a combination of modest contributions from several mechanisms rather than from a single dominant contribution. PMID:16602823

  20. Skill complementarity enhances heterophily in collaboration networks

    PubMed Central

    Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene

    2016-01-01

    Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people who have professions different from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both empirical analysis and model calibration show that the heterophilous feature is persistent along the evolution of online societies. Furthermore, the degree of skill complementarity between collaborators is positively correlated with their production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems. PMID:26743687

  1. No firewalls in quantum gravity: the role of discreteness of quantum geometry in resolving the information loss paradox

    NASA Astrophysics Data System (ADS)

    Perez, Alejandro

    2015-04-01

    In an approach to quantum gravity where space-time arises from coarse graining of fundamentally discrete structures, black hole formation and subsequent evaporation can be described by a unitary evolution without the problems encountered by the standard remnant scenario or the schemes where information is assumed to come out with the radiation during evaporation (firewalls and complementarity). The final state is purified by correlations with the fundamental pre-geometric structures (in the sense of Wheeler), which are available in such approaches, and, like defects in the underlying space-time weave, can carry zero energy.

  2. Weak convergence of a projection algorithm for variational inequalities in a Banach space

    NASA Astrophysics Data System (ADS)

    Iiduka, Hideaki; Takahashi, Wataru

    2008-03-01

    Let C be a nonempty, closed convex subset of a Banach space E. In this paper, motivated by Alber [Ya.I. Alber, Metric and generalized projection operators in Banach spaces: Properties and applications, in: A.G. Kartsatos (Ed.), Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, in: Lecture Notes Pure Appl. Math., vol. 178, Dekker, New York, 1996, pp. 15-50], we introduce the following iterative scheme for finding a solution of the variational inequality problem for an inverse-strongly-monotone operator A in a Banach space: x_1 = x ∈ C and x_{n+1} = Π_C J^{-1}(J x_n − λ_n A x_n) for every n = 1, 2, …, where Π_C is the generalized projection from E onto C, J is the duality mapping from E into E*, and {λ_n} is a sequence of positive real numbers. Then we show a weak convergence theorem (Theorem 3.1). Finally, using this result, we consider the convex minimization problem, the complementarity problem, and the problem of finding a point u ∈ E satisfying 0 = Au.
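
    In a Hilbert space (e.g. E = ℝⁿ) the duality mapping J is the identity and Π_C reduces to the ordinary metric projection, so the scheme above becomes the classical projected-gradient iteration x_{n+1} = P_C(x_n − λ_n A x_n). A minimal numerical sketch of that special case (the operator, the set C, and the step size below are illustrative choices, not taken from the paper):

```python
import numpy as np

# Projected-gradient iteration for the variational inequality
#   find x* in C with <A x*, y - x*> >= 0 for all y in C,
# specialized to C = nonnegative orthant and A(x) = M x + q.
# With M = 2I, A is inverse-strongly monotone with constant 1/2,
# so any fixed step lam in (0, 1) converges.

def project_C(x):
    """Euclidean projection onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

M = 2.0 * np.eye(2)
q = np.array([-2.0, -4.0])

def A(x):
    return M @ x + q

x = np.zeros(2)
lam = 0.4
for _ in range(200):
    x = project_C(x - lam * A(x))

# For this C the variational inequality is the linear complementarity
# problem 0 <= x  perp  Mx + q >= 0, whose solution here is x = (1, 2).
print(x)
```

With C the nonnegative orthant, the variational inequality coincides with the complementarity problem mentioned in the abstract, which is why the same iteration covers both.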

  3. Functional divergence in nitrogen uptake rates explains diversity-productivity relationship in microalgal communities

    DOE PAGES

    Mandal, Shovon; Shurin, Jonathan B.; Efroymson, Rebecca A.; ...

    2018-05-23

    The relationship between biodiversity and productivity has emerged as a central theme in ecology. Mechanistic explanations for this relationship suggest that the role organisms play in the ecosystem (i.e., niches or functional traits) is a better predictor of ecosystem stability and productivity than taxonomic richness. Here, we tested the capacity of functional diversity in nitrogen uptake in experimental microalgal communities to predict the complementarity effect (CE) and selection effect (SE) of biodiversity on productivity. We grew five algal species as monocultures and as polycultures in pairwise combinations in homogeneous (ammonium, nitrate, or urea alone) and heterogeneous (mixed nitrogen) environments to determine whether complementarity between species may be enhanced in heterogeneous environments. We show that the positive diversity effects on productivity in heterogeneous environments resulted from complementarity effects with no positive contribution by species-specific SEs. Positive biodiversity effects in homogeneous environments, when present (nitrate and urea treatments but not ammonium), were driven both by CE and SE. Our results suggest that functional diversity increases species complementarity and productivity mainly in heterogeneous resource environments. Furthermore, these results provide evidence that the positive effect of functional diversity on community productivity depends on the diversity of resources present in the environment.

  5. Functional group diversity of bee pollinators increases crop yield

    PubMed Central

    Hoehn, Patrick; Tscharntke, Teja; Tylianakis, Jason M; Steffan-Dewenter, Ingolf

    2008-01-01

    Niche complementarity is a commonly invoked mechanism underlying the positive relationship between biodiversity and ecosystem functioning, but little empirical evidence exists for complementarity among pollinator species. This study related differences in three functional traits of pollinating bees (flower height preference, daily time of flower visitation and within-flower behaviour) to the seed set of the obligate cross-pollinated pumpkin Cucurbita moschata Duch. ex Poir. across a land-use intensity gradient from tropical rainforest and agroforests to grassland in Indonesia. Bee richness and abundance changed with habitat variables and we used this natural variation to test whether complementary resource use by the diverse pollinator community enhanced final yield. We found that pollinator diversity, but not abundance, was positively related to seed set of pumpkins. Bees showed species-specific spatial and temporal variation in flower visitation traits and within-flower behaviour, allowing for classification into functional guilds. Diversity of functional groups explained even more of the variance in seed set (r² = 45%) than did species richness (r² = 32%), highlighting the role of functional complementarity. Even though we do not provide experimental, but rather correlative evidence, we can link spatial and temporal complementarity in highly diverse pollinator communities to pollination success in the field, leading to enhanced crop yield without any managed honeybees. PMID:18595841

  6. Low energy electron catalyst: the electronic origin of catalytic strategies.

    PubMed

    Davis, Daly; Sajeev, Y

    2016-10-12

    Using a low energy electron (LEE) as a catalyst, the electronic origin of the catalytic strategies corresponding to substrate selectivity, reaction specificity and reaction rate enhancement is investigated for a reversible unimolecular elementary reaction. An electronic energy complementarity between the catalyst and the substrate molecule is the origin of substrate selectivity and reaction specificity. The electronic energy complementarity is induced by tuning the electronic energy of the catalyst. The energy complementarity maximizes the binding forces between the catalyst and the molecule. Consequently, a new electronically metastable high-energy reactant state and a corresponding new low barrier reaction path are resonantly created for a specific reaction of the substrate through the formation of a catalyst-substrate transient adduct. The LEE catalysis also reveals a fundamental structure-energy correspondence in the formation of the catalyst-substrate transient adduct. Since the energy complementarities corresponding to the substrate molecules of the forward and the backward steps of the reversible reactions are not the same due to their structural differences, the LEE catalyst exhibits a unique one-way catalytic strategy, i.e., the LEE catalyst favors the reversible reaction more effectively in one direction. A characteristic stronger binding of the catalyst to the transition state of the reaction than in the initial reactant state and the final product state is the molecular origin of barrier lowering.

  7. Pupils' over-reliance on linearity: a scholastic effect?

    PubMed

    Van Dooren, Wim; De Bock, Dirk; Janssens, Dirk; Verschaffel, Lieven

    2007-06-01

    From upper elementary education on, children develop a tendency to over-use linearity. In particular, many pupils assume that if a figure is enlarged k times, its area is enlarged k times too. However, most research has been conducted with traditional, school-like word problems. This study examines whether pupils also over-use linearity when non-linear problems are embedded in meaningful, authentic performance tasks instead of traditional, school-like word problems, and whether this experience influences later behaviour. Participants were ninety-three sixth graders from two primary schools in Flanders, Belgium. Pupils received a pre-test with traditional word problems. Those who made a linear error on the non-linear area problem were subjected to individual interviews. They received one new non-linear problem, in the S-condition (again a traditional, scholastic word problem), the D-condition (the same word problem with a drawing), or the P-condition (a meaningful performance-based task). Shortly afterwards, pupils received a post-test, again containing a non-linear word problem. Most pupils from the S-condition displayed linear reasoning during the interview. Offering drawings (D-condition) had a positive effect, but presenting the problem as a performance task (P-condition) was more beneficial: linear reasoning was nearly absent in the P-condition. Remarkably, at the post-test, most pupils from all three groups again applied linear strategies. Pupils' over-reliance on linearity thus seems partly elicited by the school-like word-problem format of test items. Pupils perform much better when non-linear problems are offered as performance tasks. However, a single experience does not change performance on a comparable word-problem test afterwards.

  8. Correlation complementarity yields bell monogamy relations.

    PubMed

    Kurzyński, P; Paterek, T; Ramanathan, R; Laskowski, W; Kaszlikowski, D

    2011-05-06

    We present a method to derive Bell monogamy relations by connecting the complementarity principle with quantum nonlocality. The resulting monogamy relations are stronger than those obtained from the no-signaling principle alone. In many cases, they yield tight quantum bounds on the amount of violation of single and multiple qubit correlation Bell inequalities. In contrast with the two-qubit case, a rich structure of possible violation patterns is shown to exist in the multipartite scenario.

  9. Reviving Complementarity: John Wheeler's efforts to apply complementarity toward a quantum description of gravitation

    NASA Astrophysics Data System (ADS)

    Halpern, Paul

    2017-01-01

    In 1978, John Wheeler proposed the delayed-choice thought experiment as a generalization of the classic double-slit experiment intended to help elucidate the nature of decision making in quantum measurement. In particular, he wished to illustrate how a decision made after a quantum system was prepared might retrospectively affect the outcome. He extended his methods to the universe itself, raising the question of whether the universe is a "self-excited circuit" in which scientific measurements in the present affect the quantum dynamics in the past. In this talk we'll show how Wheeler's approach revived the notion of Bohr's complementarity, which had by then faded from the prevailing discourse of quantum measurement theory. Wheeler's advocacy reflected, in part, his wish to eliminate the divide in quantum theory between measurer and what was being measured, bringing greater consistency to the ideas of Bohr, a mentor whom he deeply respected.

  10. Bohr, Heisenberg and the divergent views of complementarity

    NASA Astrophysics Data System (ADS)

    Camilleri, Kristian

    The fractious discussions between Bohr and Heisenberg in Copenhagen in 1927 have been the subject of much historical scholarship. However, little attention has been given to Heisenberg's understanding of the notion of complementary space-time and causal descriptions, which was presented for the first time in Bohr's lecture at the 1927 Como conference. In this paper, I argue that Heisenberg's own interpretation of this notion differed substantially from Bohr's. Whereas Bohr had intended this form of complementarity to entail a choice between a space-time description of the electron in an atom and a definition of the energy of a stationary state, Heisenberg interpreted the 'causal' description in terms of the ψ-function in configuration space. In disentangling the two views of complementarity, this paper sheds new light on the hidden philosophical disagreements between these two founders of the so-called 'Copenhagen interpretation' of quantum mechanics.

  11. Regularized finite element modeling of progressive failure in soils within nonlocal softening plasticity

    NASA Astrophysics Data System (ADS)

    Huang, Maosong; Qu, Xie; Lü, Xilin

    2017-11-01

    By solving a nonlinear complementarity problem for the consistency condition, an improved implicit stress-return iterative algorithm for a generalized over-nonlocal strain-softening plasticity was proposed, and the consistent tangent matrix was obtained. The proposed algorithm was embedded into existing finite element codes, and it enables the nonlocal regularization of the ill-posed boundary value problems caused by pressure-independent and pressure-dependent strain-softening plasticity. The algorithm was verified by numerical modeling of strain localization in a plane strain compression test. The results showed that fast convergence can be achieved and that the mesh dependency caused by strain softening can be effectively eliminated. The influences of the hardening modulus and the material characteristic length on the simulation were obtained. The proposed algorithm was further used in simulations of the bearing capacity of a strip footing; the results are mesh-independent, and the progressive failure process of the soil was well captured.
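
    The abstract does not spell out the stress-return algorithm itself, but the complementarity structure it relies on can be illustrated on the linear model problem 0 ≤ x ⟂ Mx + q ≥ 0. Below is a generic projected Gauss-Seidel sweep for that problem (a sketch for illustration only, not the authors' method; the test matrix and vector are arbitrary):

```python
import numpy as np

def projected_gauss_seidel(M, q, sweeps=200):
    """Solve the LCP  0 <= x  perp  M x + q >= 0  for symmetric
    positive-definite M: relax one coordinate at a time, then clip
    at zero to maintain feasibility."""
    x = np.zeros(len(q))
    for _ in range(sweeps):
        for i in range(len(q)):
            # residual of row i with the contribution of x[i] removed
            r = q[i] + M[i] @ x - M[i, i] * x[i]
            x[i] = max(0.0, -r / M[i, i])
    return x

M = np.array([[4.0, 1.0],
              [1.0, 3.0]])
q = np.array([-5.0, -6.0])
x = projected_gauss_seidel(M, q)
w = M @ x + q

# At the solution: x >= 0, w >= 0, and x_i * w_i = 0 componentwise.
print(x, x @ w)
```

In a plasticity code the same complementarity pattern appears between the plastic multiplier and the yield function, which is what the consistency condition in the abstract encodes.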

  12. Qualification of a Quantitative Method for Monitoring Aspartate Isomerization of a Monoclonal Antibody by Focused Peptide Mapping.

    PubMed

    Cao, Mingyan; Mo, Wenjun David; Shannon, Anthony; Wei, Ziping; Washabaugh, Michael; Cash, Patricia

    Aspartate (Asp) isomerization is a common post-translational modification of recombinant therapeutic proteins that can occur during manufacturing, storage, or administration. Asp isomerization in the complementarity-determining regions of a monoclonal antibody may affect target binding, and thus a sufficiently robust quality control method for routine monitoring is desirable. In this work, we utilized a liquid chromatography-mass spectrometry (LC/MS)-based approach to identify Asp isomerization in the complementarity-determining regions of a therapeutic monoclonal antibody. To quantitate the site-specific Asp isomerization of the monoclonal antibody, a UV detection-based quantitation assay utilizing the same LC platform was developed. The assay was qualified and implemented for routine monitoring of this product-specific modification. Compared with existing methods, this analytical paradigm can be applied to identify Asp isomerization (or other modifications) and subsequently to develop a rapid, sufficiently robust quality control method for routine site-specific monitoring and quantitation to ensure product quality. This approach first identifies and locates a product-related impurity (a critical quality attribute) caused by isomerization, deamidation, oxidation, or other post-translational modifications, and then utilizes synthetic peptides and MS to assist the development of an LC-UV-based chromatographic method that separates and quantifies the product-related impurities by UV peaks. The established LC-UV method has acceptable peak specificity, precision, linearity, and accuracy; it can be validated and used in a good manufacturing practice environment for lot release and stability testing. Aspartate isomerization is a common post-translational modification of recombinant proteins during the manufacturing process and storage.
    Isomerization in the complementarity-determining regions (CDRs) of a monoclonal antibody A (mAb-A) has been detected and shown to have an impact on the binding affinity to the antigen. In this work, we utilized a mass spectrometry-based peptide mapping approach to detect and quantitate the Asp isomerization in the CDRs of mAb-A. To routinely monitor the CDR isomerization of mAb-A, a focused peptide mapping method utilizing reversed-phase chromatographic separation and UV detection has been developed and qualified. This approach is generally applicable to monitor isomerization and other post-translational modifications of proteins in a specific and high-throughput mode to ensure product quality. © PDA, Inc. 2016.

  13. Atoms-in-molecules study of the genetically encoded amino acids. III. Bond and atomic properties and their correlations with experiment including mutation-induced changes in protein stability and genetic coding.

    PubMed

    Matta, Chérif F; Bader, Richard F W

    2003-08-15

    This article presents a study of the molecular charge distributions of the genetically encoded amino acids (AA), one that builds on the previous determination of their equilibrium geometries and the demonstrated transferability of their common geometrical parameters. The properties of the charge distributions are characterized and given quantitative expression in terms of the bond and atomic properties determined within the quantum theory of atoms-in-molecules (QTAIM) that defines atoms and bonds in terms of the observable charge density. The properties so defined are demonstrated to be remarkably transferable, a reflection of the underlying transferability of the charge distributions of the main chain and other groups common to the AA. The use of the atomic properties in obtaining an understanding of the biological functions of the AA, whether free or bound in a polypeptide, is demonstrated by the excellent statistical correlations they yield with experimental physicochemical properties. A property of the AA side chains of particular importance is the charge separation index (CSI), a quantity previously defined as the sum of the magnitudes of the atomic charges and which measures the degree of separation of positive and negative charges in the side chain of interest. The CSI values provide a correlation with the measured free energies of transfer of capped side chain analogues, from the vapor phase to aqueous solution, yielding a linear regression equation with r² = 0.94. The atomic volume is defined by the van der Waals isodensity surface and it, together with the CSI, which accounts for the electrostriction of the solvent, yields a linear regression (r² = 0.98) with the measured partial molar volumes of the AAs.
    The changes in free energies of transfer from octanol to water upon interchanging 153 pairs of AAs, and from cyclohexane to water upon interchanging 190 pairs of AAs, were modeled using only three calculated parameters (representing electrostatic and volume contributions), yielding linear regressions with r² values of 0.78 and 0.89, respectively. These results are a prelude to the single-site mutation-induced changes in the stabilities of two typical proteins: ubiquitin and staphylococcal nuclease. Strong quadratic correlations (r² ≈ 0.9) were obtained between ΔCSI upon mutation and each of the two terms ΔΔH and TΔΔS taken from recent and accurate differential scanning calorimetry experiments on ubiquitin. When the two terms are summed to yield ΔΔG, the quadratic terms nearly cancel, and the result is a simple linear fit between ΔΔG and ΔCSI with r² = 0.88. As another example, the change in the stability of staphylococcal nuclease upon mutation has been fitted linearly (r² = 0.83) to the sum of a ΔCSI term and a term representing the change in the van der Waals volume of the side chains upon mutation. The suggested correlation of the polarity of the side chain with the second letter of the AA triplet genetic codon is given concrete expression in a classification of the side chains in terms of their CSI values and their group dipole moments. For example, all amino acids with a pyrimidine base as their second letter in mRNA possess side-chain CSI ≤ 2.8 (with the exception of Cys), whereas all those with CSI > 2.8 possess a purine base. The article concludes with two proposals for measuring and predicting molecular complementarity: van der Waals complementarity expressed in terms of the van der Waals isodensity surface, and Lewis complementarity expressed in terms of the local charge concentrations and depletions defined by the topology of the Laplacian of the electron density.
A display of the experimentally accessible Laplacian distribution for a folded protein would offer a clear picture of the operation of the "stereochemical code" proposed as the determinant in the folding process. Copyright 2003 Wiley-Liss, Inc.

  14. COMplementary Primer ASymmetric PCR (COMPAS-PCR) Applied to the Identification of Salmo salar, Salmo trutta and Their Hybrids

    PubMed Central

    2016-01-01

    Avoiding complementarity between primers when designing a PCR assay constitutes a central rule strongly anchored in the mind of the molecular scientist. 3’-complementarity will extend the primers during PCR elongation using one another as template, consequently disabling further possible involvement in traditional target amplification. However, a 5’-complementarity will leave the primers unchanged during PCR cycles, albeit sequestered to one another, therefore also suppressing target amplification. We show that 5’-complementarity between primers may be exploited in a new PCR method called COMplementary-Primer-Asymmetric (COMPAS)-PCR, using asymmetric primer concentrations to achieve target PCR amplification. Moreover, such a design may paradoxically reduce spurious non-target amplification by actively sequestering the limiting primer. The general principles were demonstrated using 5S rDNA direct repeats as target sequences to design a species-specific assay for identifying Salmo salar and Salmo trutta using almost fully complementary primers overlapping the same target sequence. Specificity was enhanced by using 3’-penultimate point mutations and the assay was further developed to enable identification of S. salar x S. trutta hybrids by High Resolution Melt analysis in a 35 min one-tube assay. This small paradigm shift, using highly complementary primers for PCR, should help develop robust assays that previously would not be considered. PMID:27783658
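
    The 3'- versus 5'-complementarity distinction drawn above is straightforward to screen for computationally. A minimal sketch (the function names and the 5-base window are illustrative choices, not part of the published assay):

```python
# Watson-Crick complements; all sequences written 5' -> 3'.
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq):
    """Reverse complement of a 5'->3' sequence."""
    return "".join(COMP[b] for b in reversed(seq))

def anneals_at_3prime(primer, other, n=5):
    """True if the 3'-terminal n bases of `primer` can pair
    (antiparallel) anywhere along `other`: the risky case, since a
    paired 3' end lets the polymerase extend one primer on the other.
    A purely 5' overlap only sequesters the primers."""
    return revcomp(primer[-n:]) in other

p1 = "ACGTACGTTTACG"
p2 = "GGGCGTAAGGG"   # contains CGTAA, the reverse complement of TTACG
print(anneals_at_3prime(p1, p2))  # True: 3' end of p1 could be extended
```
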

  15. A stochastic equilibrium model for the North American natural gas market

    NASA Astrophysics Data System (ADS)

    Zhuang, Jifang

    This dissertation is an endeavor in the field of energy modeling for the North American natural gas market, using a mixed complementarity formulation combined with stochastic programming. The genesis of the stochastic equilibrium model presented in this dissertation is the deterministic market equilibrium model developed in [Gabriel, Kiet and Zhuang, 2005]. Based on some improvements that we made to this model, including proving new existence and uniqueness results, we present a multistage stochastic equilibrium model with uncertain demand for the deregulated North American natural gas market using the recourse method of stochastic programming. The market participants considered by the model are pipeline operators, producers, storage operators, peak gas operators, marketers and consumers. Pipeline operators are described with regulated tariffs but also involve "congestion pricing" as a mechanism to allocate scarce pipeline capacity. Marketers are modeled as Nash-Cournot players in sales to the residential and commercial sectors but price-takers in all other aspects. Consumers are represented by demand functions in the marketers' problem. Producers, storage operators and peak gas operators are price-takers, consistent with perfect competition. Also, two types of natural gas markets are included: the long-term and spot markets. Market participants make both high-level planning decisions (first-stage decisions) in the long-term market and daily operational decisions (recourse decisions) in the spot market, subject to their engineering, resource and political constraints as well as market constraints on both the demand and the supply side, so as to simultaneously maximize their expected profits given others' decisions. The model is shown to be an instance of a mixed complementarity problem (MiCP) under minor conditions.
The MiCP formulation is derived from applying the Karush-Kuhn-Tucker optimality conditions of the optimization problems faced by the market participants. Some theoretical results regarding the market prices in both markets are shown. We also illustrate the model on a representative, sample network of two production nodes, two consumption nodes with discretely distributed end-user demand and three seasons using four cases.
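
    The KKT-to-MiCP derivation described above can be made concrete with a single price-taking producer facing a quadratic cost and a capacity bound (a toy illustration with made-up parameters, not the dissertation's gas-market model):

```python
# Producer: max_q  p*q - 0.5*a*q**2   subject to  0 <= q <= K.
# Writing the KKT conditions with multiplier mu for q <= K gives
# one block of a mixed complementarity problem (MiCP):
#   0 <= q   perp   a*q - p + mu >= 0     (stationarity)
#   0 <= mu  perp   K - q        >= 0     (capacity)

def producer_kkt(p, a, K):
    """Closed-form solution of the two-condition system above."""
    q = min(p / a, K)          # produce until marginal cost = price, capped
    mu = max(0.0, p - a * K)   # scarcity rent when capacity binds
    return q, mu

p, a, K = 5.0, 1.0, 3.0
q, mu = producer_kkt(p, a, K)

# Both complementarity conditions hold exactly:
assert q * (a * q - p + mu) == 0.0 and mu * (K - q) == 0.0
print(q, mu)  # 3.0 2.0: capacity binds, so the constraint earns rent
```

Stacking such blocks for every market participant, together with market-clearing conditions, yields the full MiCP that complementarity solvers handle directly.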

  16. CPdock: the complementarity plot for docking of proteins: implementing multi-dielectric continuum electrostatics.

    PubMed

    Basu, Sankar

    2017-12-07

    The complementarity plot (CP) is an established validation tool for protein structures, applicable to both globular proteins (folding) as well as protein-protein complexes (binding). It computes the shape and electrostatic complementarities (Sm, Em) for amino acid side-chains buried within the protein interior or interface and plots them in a two-dimensional plot having knowledge-based probabilistic quality estimates for the residues as well as for the whole structure. The current report essentially presents an upgraded version of the plot with the implementation of the advanced multi-dielectric functionality (as in Delphi version 6.2 or higher) in the computation of electrostatic complementarity to make the validation tool physico-chemically more realistic. The two methods (single- and multi-dielectric) agree decently in their resultant Em values, and hence, provisions for both methods have been kept in the software suite. So to speak, the global electrostatic balance within a well-folded protein and/or a well-packed interface seems only marginally perturbed by the choice of different internal dielectric values. However, both from theoretical as well as practical grounds, the more advanced multi-dielectric version of the plot is certainly recommended for potentially producing more reliable results. The report also presents a new methodology and a variant plot, namely CPdock, based on the same principles of complementarity, specifically designed to be used in the docking of proteins. The efficacy of the method to discriminate between good and bad docked protein complexes has been tested on a recent state-of-the-art docking benchmark. The results unambiguously indicate that CPdock can indeed be effective in the initial screening phase of a docking scoring pipeline before going into more sophisticated and computationally expensive scoring functions. CPdock has been made available at https://github.com/nemo8130/CPdock .
Graphical Abstract An example showing the efficacy of CP dock to be used in the initial screening phase of a protein-protein docking scoring pipeline.

  17. Frequency-Independent Response of Self-Complementary Checkerboard Screens

    NASA Astrophysics Data System (ADS)

    Urade, Yoshiro; Nakata, Yosuke; Nakanishi, Toshihiro; Kitano, Masao

    2015-06-01

    This research resolves a long-standing problem on the electromagnetic response of self-complementary metallic screens with checkerboardlike geometry. Although Babinet's principle implies that they show a frequency-independent response, this unusual characteristic has not been observed yet due to the singularities of the metallic point contacts in the checkerboard geometry. We overcome this difficulty by replacing the point contacts with resistive sheets. The proposed structure is prepared and characterized by terahertz time-domain spectroscopy. It is experimentally confirmed that the resistive checkerboard structures exhibit a flat transmission spectrum over 0.1-1.1 THz. It is also demonstrated that self-complementarity can eliminate even the frequency-dependent transmission characteristics of resonant metamaterials.

  18. Antiandrogenic steroidal sulfonyl heterocycles. Utility of electrostatic complementarity in defining bioisosteric sulfonyl heterocycles.

    PubMed

    Mallamo, J P; Pilling, G M; Wetzel, J R; Kowalczyk, P J; Bell, M R; Kullnig, R K; Batzold, F H; Juniewicz, P E; Winneker, R C; Luss, H R

    1992-05-15

    Complementarity of electrostatic potential surface maps was utilized in defining bioisosteric steroidal androgen receptor antagonists. Semiempirical and ab initio level calculations performed on a series of methanesulfonyl heterocycles indicated the requirement for a partial negative charge at the heteroatom attached to C-3 of the steroid nucleus to attain androgen receptor affinity. Synthesis and testing of six heterocycle A-ring-fused dihydroethisterone derivatives support this hypothesis, and we have identified two new androgen receptor antagonists of this class.

  19. Hybrid ququart-encoded quantum cryptography protected by Kochen-Specker contextuality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabello, Adan; Department of Physics, Stockholm University, S-10691 Stockholm; D'Ambrosio, Vincenzo

    2011-09-15

    Quantum cryptographic protocols based on complementarity are not secure against attacks in which complementarity is imitated with classical resources. The Kochen-Specker (KS) theorem provides protection against these attacks, without requiring entanglement or spatially separated composite systems. We analyze the maximum tolerated noise to guarantee the security of a KS-protected cryptographic scheme against these attacks and describe a photonic realization of this scheme using hybrid ququarts defined by the polarization and orbital angular momentum of single photons.

  20. Wave-particle dualism and complementarity unraveled by a different mode

    PubMed Central

    Menzel, Ralf; Puhlmann, Dirk; Heuer, Axel; Schleich, Wolfgang P.

    2012-01-01

    The precise knowledge of one of two complementary experimental outcomes prevents us from obtaining complete information about the other one. This formulation of Niels Bohr’s principle of complementarity when applied to the paradigm of wave-particle dualism—that is, to Young’s double-slit experiment—implies that the information about the slit through which a quantum particle has passed erases interference. In the present paper we report a double-slit experiment using two photons created by spontaneous parametric down-conversion where we observe interference in the signal photon despite the fact that we have located it in one of the slits due to its entanglement with the idler photon. This surprising aspect of complementarity comes to light by our special choice of the TEM01 pump mode. According to quantum field theory the signal photon is then in a coherent superposition of two distinct wave vectors giving rise to interference fringes analogous to two mechanical slits. PMID:22628561

  1. Profiling charge complementarity and selectivity for binding at the protein surface.

    PubMed

    Sulea, Traian; Purisima, Enrico O

    2003-05-01

    A novel analysis and representation of the protein surface in terms of electrostatic binding complementarity and selectivity is presented. The charge optimization methodology is applied in a probe-based approach that simulates the binding process to the target protein. The molecular surface is color coded according to calculated optimal charge or according to charge selectivity, i.e., the binding cost of deviating from the optimal charge. The optimal charge profile depends on both the protein shape and charge distribution whereas the charge selectivity profile depends only on protein shape. High selectivity is concentrated in well-shaped concave pockets, whereas solvent-exposed convex regions are not charge selective. This suggests the synergy of charge and shape selectivity hot spots toward molecular selection and recognition, as well as the asymmetry of charge selectivity at the binding interface of biomolecular systems. The charge complementarity and selectivity profiles map relevant electrostatic properties in a readily interpretable way and encode information that is quite different from that visualized in the standard electrostatic potential map of unbound proteins.

  2. Complementarity of PALM and SOFI for super-resolution live-cell imaging of focal adhesions

    PubMed Central

    Deschout, Hendrik; Lukes, Tomas; Sharipov, Azat; Szlag, Daniel; Feletti, Lely; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Leutenegger, Marcel; Lasser, Theo; Radenovic, Aleksandra

    2016-01-01

    Live-cell imaging of focal adhesions requires a sufficiently high temporal resolution, which remains a challenge for super-resolution microscopy. Here we address this important issue by combining photoactivated localization microscopy (PALM) with super-resolution optical fluctuation imaging (SOFI). Using simulations and fixed-cell focal adhesion images, we investigate the complementarity between PALM and SOFI in terms of spatial and temporal resolution. This PALM-SOFI framework is used to image focal adhesions in living cells, while obtaining a temporal resolution below 10 s. We visualize the dynamics of focal adhesions, and reveal local mean velocities around 190 nm min−1. The complementarity of PALM and SOFI is assessed in detail with a methodology that integrates a resolution and signal-to-noise metric. This PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of focal adhesions through the estimation of molecular parameters such as fluorophore densities and photoactivation or photoswitching kinetics. PMID:27991512

  3. Complementarity of PALM and SOFI for super-resolution live-cell imaging of focal adhesions

    NASA Astrophysics Data System (ADS)

    Deschout, Hendrik; Lukes, Tomas; Sharipov, Azat; Szlag, Daniel; Feletti, Lely; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Leutenegger, Marcel; Lasser, Theo; Radenovic, Aleksandra

    2016-12-01

    Live-cell imaging of focal adhesions requires a sufficiently high temporal resolution, which remains a challenge for super-resolution microscopy. Here we address this important issue by combining photoactivated localization microscopy (PALM) with super-resolution optical fluctuation imaging (SOFI). Using simulations and fixed-cell focal adhesion images, we investigate the complementarity between PALM and SOFI in terms of spatial and temporal resolution. This PALM-SOFI framework is used to image focal adhesions in living cells, while obtaining a temporal resolution below 10 s. We visualize the dynamics of focal adhesions, and reveal local mean velocities around 190 nm min−1. The complementarity of PALM and SOFI is assessed in detail with a methodology that integrates a resolution and signal-to-noise metric. This PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of focal adhesions through the estimation of molecular parameters such as fluorophore densities and photoactivation or photoswitching kinetics.

  4. Complementarity of PALM and SOFI for super-resolution live-cell imaging of focal adhesions.

    PubMed

    Deschout, Hendrik; Lukes, Tomas; Sharipov, Azat; Szlag, Daniel; Feletti, Lely; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Leutenegger, Marcel; Lasser, Theo; Radenovic, Aleksandra

    2016-12-19

    Live-cell imaging of focal adhesions requires a sufficiently high temporal resolution, which remains a challenge for super-resolution microscopy. Here we address this important issue by combining photoactivated localization microscopy (PALM) with super-resolution optical fluctuation imaging (SOFI). Using simulations and fixed-cell focal adhesion images, we investigate the complementarity between PALM and SOFI in terms of spatial and temporal resolution. This PALM-SOFI framework is used to image focal adhesions in living cells, while obtaining a temporal resolution below 10 s. We visualize the dynamics of focal adhesions, and reveal local mean velocities around 190 nm min−1. The complementarity of PALM and SOFI is assessed in detail with a methodology that integrates a resolution and signal-to-noise metric. This PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of focal adhesions through the estimation of molecular parameters such as fluorophore densities and photoactivation or photoswitching kinetics.

  5. Supramolecular inorganic species: An expedition into a fascinating, rather unknown land mesoscopia with interdisciplinary expectations and discoveries

    NASA Astrophysics Data System (ADS)

    Müller, A.

    1994-09-01

    One of the basic problems in science is understanding the potentialities of material systems, a topic relevant to disciplines ranging from natural philosophy, topology and/or structural chemistry, and biology (morphogenesis) to materials science. Information on this problem can be obtained by studying the different types of linking of basic fragments in self-assembly processes, a type of reaction which has proved to be one of the most important in the biological and material world. The outlined problem can be studied particularly well in the case of polyoxometalates, with reference to basic organizing principles of material systems such as conservative self-organization (self-assembly), host-guest interactions, complementarity, molecular recognition, emergence vs. reduction (as a dialectic unit), template direction, exchange interactions and, in general, the mesoscopic material world with its unusual properties and its topological and/or structural diversity. As outlined, and perhaps predicted, here, science will lose significance as an interdisciplinary unit if more importance is not attached to these general aspects in the future.

  6. Electrostatics in protein–protein docking

    PubMed Central

    Heifetz, Alexander; Katchalski-Katzir, Ephraim; Eisenstein, Miriam

    2002-01-01

    A novel geometric-electrostatic docking algorithm is presented, which tests and quantifies the electrostatic complementarity of the molecular surfaces together with the shape complementarity. We represent each molecule to be docked as a grid of complex numbers, storing information regarding the shape of the molecule in the real part and information regarding the electrostatic character of the molecule in the imaginary part. The electrostatic descriptors are derived from the electrostatic potential of the molecule. Thus, the electrostatic character of the molecule is represented as patches of positive, neutral, or negative values. The potential for each molecule is calculated only once and stored as potential spheres adequate for exhaustive rotation/translation scans. The geometric-electrostatic docking algorithm is applied to 17 systems, starting from the structures of the unbound molecules. The results—in terms of the complementarity scores of the nearly correct solutions, their ranking in the lists of sorted solutions, and their statistical uniqueness—are compared with those of geometric docking, showing that the inclusion of electrostatic complementarity in docking is very important, in particular in docking of unbound structures. Based on our results, we formulate several "good electrostatic docking rules": The geometric-electrostatic docking procedure is more successful than geometric docking when the potential patches are large and when the potential extends away from the molecular surface and protrudes into the solvent. In contrast, geometric docking is recommended when the electrostatic potential around the molecules to be docked appears homogeneous, that is, with a similar sign all around the molecule. PMID:11847280

  7. Evidence for Context-Dependent Complementarity of Non-Shine-Dalgarno Ribosome Binding Sites to Escherichia coli rRNA

    PubMed Central

    Barendt, Pamela A.; Shah, Najaf A.; Barendt, Gregory A.; Kothari, Parth A.; Sarkar, Casim A.

    2013-01-01

    While the ribosome has evolved to function in complex intracellular environments, these contexts do not easily allow for the study of its inherent capabilities. We have used a synthetic, well-defined, Escherichia coli (E. coli)-based translation system in conjunction with ribosome display, a powerful in vitro selection method, to identify ribosome binding sites (RBSs) that can promote the efficient translation of messenger RNAs (mRNAs) with a leader length representative of natural E. coli mRNAs. In previous work, we used a longer leader sequence and unexpectedly recovered highly efficient cytosine-rich sequences with complementarity to the 16S ribosomal RNA (rRNA) and similarity to eukaryotic RBSs. In the current study, Shine-Dalgarno (SD) sequences were prevalent but non-SD sequences were also heavily enriched and were dominated by novel guanine- and uracil-rich motifs which showed statistically significant complementarity to the 16S rRNA. Additionally, only SD motifs exhibited position-dependent decreases in sequence entropy, indicating that non-SD motifs likely operate by increasing the local concentration of ribosomes in the vicinity of the start codon, rather than by a position-dependent mechanism. These results further support the putative generality of mRNA-rRNA complementarity in facilitating mRNA translation, but also suggest that context (e.g., leader length and composition) dictates the specific subset of possible RBSs that are used for efficient translation of a given transcript. PMID:23427812

  8. High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad

    2012-01-01

    NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making it ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphic Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction has been carried out, after which the anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other with contact, friction, and cohesional forces. The multibody dynamics simulation approach used for simulating anchors penetrating a soil uses a differential variational inequality (DVI) methodology to solve the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU (central processing unit)-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time.
The figure shows an example of this capability where the Brazil Nut problem is simulated: as the container full of granular material is vibrated, the large ball slowly moves upwards. This capability was expanded to account for anchors of different shapes and penetration velocities, interacting with granular soils.
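At each time step, the contact formulation above reduces to a linear complementarity problem (LCP): find z ≥ 0 such that w = Mz + q ≥ 0 and zᵀw = 0. A minimal projected Gauss-Seidel sketch of such a solve is shown below; this is not the DVI solver used in Chrono::Engine, and the matrix and iteration count are illustrative only.

```python
def solve_lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and z.w = 0 (assumes M has a positive diagonal)."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # current residual w_i = q_i + (M z)_i
            r = q[i] + sum(M[i][j] * z[j] for j in range(n))
            # unconstrained Gauss-Seidel step, then project onto z_i >= 0
            z[i] = max(0.0, z[i] - r / M[i][i])
    return z

# toy symmetric positive-definite example
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, -1.0]
z = solve_lcp_pgs(M, q)
w = [q[i] + sum(M[i][j] * z[j] for j in range(2)) for i in range(2)]
```

For this data the iteration converges to z = (1/3, 1/3), where w vanishes and complementarity holds exactly.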

  9. Bacterial biodiversity-ecosystem functioning relations are modified by environmental complexity.

    PubMed

    Langenheder, Silke; Bulling, Mark T; Solan, Martin; Prosser, James I

    2010-05-26

    With the recognition that environmental change resulting from anthropogenic activities is causing a global decline in biodiversity, much attention has been devoted to understanding how changes in biodiversity may alter levels of ecosystem functioning. Although environmental complexity has long been recognised as a major driving force in evolutionary processes, it has only recently been incorporated into biodiversity-ecosystem functioning investigations. Environmental complexity is expected to strengthen the positive effect of species richness on ecosystem functioning, mainly because it leads to stronger complementarity effects, such as resource partitioning and facilitative interactions among species, when the number of available resources increases. Here we implemented an experiment to test the combined effect of species richness and environmental complexity, more specifically, resource richness, on ecosystem functioning over time. We show, using all possible combinations of species within a bacterial community consisting of six species, and all possible combinations of three substrates, that diversity-functioning (metabolic activity) relationships change over time from linear to saturated. This was probably caused by a combination of limited complementarity effects and negative interactions among competing species as the experiment progressed. Even though species richness and resource richness both enhanced ecosystem functioning, they did so independently from each other. Instead there were complex interactions between particular species and substrate combinations. Our study shows clearly that both species richness and environmental complexity increase ecosystem functioning. 
The finding that there was no direct interaction between these two factors, but that instead rather complex interactions between combinations of certain species and resources underlie positive biodiversity-ecosystem functioning relationships, suggests that detailed knowledge of how individual species interact with complex natural environments will be required in order to make reliable predictions about how altered levels of biodiversity will most likely affect ecosystem functioning.

  10. Bacterial Biodiversity-Ecosystem Functioning Relations Are Modified by Environmental Complexity

    PubMed Central

    Langenheder, Silke; Bulling, Mark T.; Solan, Martin; Prosser, James I.

    2010-01-01

    Background With the recognition that environmental change resulting from anthropogenic activities is causing a global decline in biodiversity, much attention has been devoted to understanding how changes in biodiversity may alter levels of ecosystem functioning. Although environmental complexity has long been recognised as a major driving force in evolutionary processes, it has only recently been incorporated into biodiversity-ecosystem functioning investigations. Environmental complexity is expected to strengthen the positive effect of species richness on ecosystem functioning, mainly because it leads to stronger complementarity effects, such as resource partitioning and facilitative interactions among species, when the number of available resources increases. Methodology/Principal Findings Here we implemented an experiment to test the combined effect of species richness and environmental complexity, more specifically, resource richness, on ecosystem functioning over time. We show, using all possible combinations of species within a bacterial community consisting of six species, and all possible combinations of three substrates, that diversity-functioning (metabolic activity) relationships change over time from linear to saturated. This was probably caused by a combination of limited complementarity effects and negative interactions among competing species as the experiment progressed. Even though species richness and resource richness both enhanced ecosystem functioning, they did so independently from each other. Instead there were complex interactions between particular species and substrate combinations. Conclusions/Significance Our study shows clearly that both species richness and environmental complexity increase ecosystem functioning. 
The finding that there was no direct interaction between these two factors, but that instead rather complex interactions between combinations of certain species and resources underlie positive biodiversity-ecosystem functioning relationships, suggests that detailed knowledge of how individual species interact with complex natural environments will be required in order to make reliable predictions about how altered levels of biodiversity will most likely affect ecosystem functioning. PMID:20520808

  11. Menu-Driven Solver Of Linear-Programming Problems

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
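For the purely binary case mentioned above, a tiny exhaustive solver illustrates the problem form that ALPS-class tools handle. This is a sketch only: real solvers use far more efficient techniques such as branch-and-bound, and the objective and constraint data here are invented.

```python
from itertools import product

def solve_binary_lp(c, A, b):
    """Exhaustively solve a small 0-1 linear program:
    maximize c.x subject to A x <= b, x in {0,1}^n."""
    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=len(c)):
        # check every constraint row a.x <= b_i
        if all(sum(a[j] * x[j] for j in range(len(c))) <= bi
               for a, bi in zip(A, b)):
            val = sum(ci * xi for ci, xi in zip(c, x))
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# small knapsack-style instance: item values, weights, capacity 5
c = [6, 10, 12]
A = [[1, 2, 3]]
b = [5]
x, v = solve_binary_lp(c, A, b)
```

Here the optimum takes the second and third items (total weight 5, value 22); enumeration is fine for a handful of variables but grows as 2^n.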

  12. MEGADOCK: An All-to-All Protein-Protein Interaction Prediction System Using Tertiary Structure Data

    PubMed Central

    Ohue, Masahito; Matsuzaki, Yuri; Uchikoga, Nobuyuki; Ishida, Takashi; Akiyama, Yutaka

    2014-01-01

    The elucidation of protein-protein interaction (PPI) networks is important for understanding cellular structure and function and structure-based drug design. However, the development of an effective method to conduct exhaustive PPI screening represents a computational challenge. We have been investigating a protein docking approach based on shape complementarity and physicochemical properties. We describe here the development of the protein-protein docking software package “MEGADOCK” that samples an extremely large number of protein dockings at high speed. MEGADOCK reduces the calculation time required for docking by using several techniques such as a novel scoring function called the real Pairwise Shape Complementarity (rPSC) score. We showed that MEGADOCK is capable of exhaustive PPI screening by completing docking calculations 7.5 times faster than the conventional docking software, ZDOCK, while maintaining an acceptable level of accuracy. When MEGADOCK was applied to a subset of a general benchmark dataset to predict 120 relevant interacting pairs from 120 x 120 = 14,400 combinations of proteins, an F-measure value of 0.231 was obtained. Further, we showed that MEGADOCK can be applied to a large-scale protein-protein interaction-screening problem with accuracy better than random. When our approach is combined with parallel high-performance computing systems, it is now feasible to search and analyze protein-protein interactions while taking into account three-dimensional structures at the interactome scale. MEGADOCK is freely available at http://www.bi.cs.titech.ac.jp/megadock. PMID:23855673
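The F-measure quoted above is the harmonic mean of precision and recall. A quick sketch with hypothetical counts (these are not MEGADOCK's actual confusion-matrix values):

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical screening outcome: 30 true pairs found, 170 false positives,
# 90 true pairs missed
f1 = f_measure(tp=30, fp=170, fn=90)
```

With these invented counts, precision is 0.15 and recall 0.25, giving an F-measure of 0.1875, so low F-values can still reflect enrichment well above random in a large all-to-all screen.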

  13. An augmented Lagrangian trust region method for inclusion boundary reconstruction using ultrasound/electrical dual-modality tomography

    NASA Astrophysics Data System (ADS)

    Liang, Guanghui; Ren, Shangjie; Dong, Feng

    2018-07-01

    Ultrasound/electrical dual-modality tomography utilizes the complementarity of ultrasound reflection tomography (URT) and electrical impedance tomography (EIT) to improve the speed and accuracy of image reconstruction. Owing to its non-invasive, radiation-free, and low-cost nature, ultrasound/electrical dual-modality tomography has attracted much attention in the field of dual-modality imaging and has many potential applications in industrial and biomedical imaging. However, the data fusion of URT and EIT is difficult due to their different theoretical foundations and measurement principles. The most commonly used data fusion strategy in ultrasound/electrical dual-modality tomography is incorporating the structured information extracted from the URT into the EIT image reconstruction process through a pixel-based constraint. Due to the inherent non-linearity and ill-posedness of EIT, images reconstructed with this strategy suffer from low resolution, especially at the boundary of the observed inclusions. To address this, an augmented Lagrangian trust region method is proposed to directly reconstruct the shapes of the inclusions from the ultrasound/electrical dual-modality measurements. In the proposed method, the shape of the target inclusion is parameterized by a radial shape model whose coefficients are used as the shape parameters. Then, the dual-modality shape inversion problem is formulated as an energy minimization problem in which the energy function derived from EIT is constrained by an ultrasound measurement model through an equality constraint equation. Finally, the optimal shape parameters associated with the optimal inclusion shape guesses are determined by minimizing the constrained cost function using the augmented Lagrangian trust region method. To evaluate the proposed method, numerical tests are carried out. 
Compared with single modality EIT, the proposed dual-modality inclusion boundary reconstruction method has a higher accuracy and is more robust to the measurement noise.

  14. The intelligence of dual simplex method to solve linear fractional fuzzy transportation problem.

    PubMed

    Narayanamoorthy, S; Kalyani, S

    2015-01-01

    An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In this proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. These two linear problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example.

  15. General monogamy equalities of complementarity relation and distributive entanglement for multi-qubit pure states

    NASA Astrophysics Data System (ADS)

    Zha, Xinwei; Da, Zhang; Ahmed, Irfan; Zhang, Dan; Zhang, Yanpeng

    2018-02-01

    In this paper, we determine the complementarity relations for pure quantum states of N qubits by presenting the definition of local and non-local forms. By comparing the entanglement monogamy equality proposed by Coffman, Kundu, and Wootters, we prove that there exist strict monogamy laws for quantum correlations in all many-qubit systems. Further, the proper form of general entanglement monogamy equality for arbitrary quantum states is found with the characterization of total quantum correlation of qubits. These results may open a new window for multi-qubit entanglement.
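For reference, the Coffman-Kundu-Wootters (CKW) relation discussed above is usually stated as an inequality for the tangle $\tau$ of a three-qubit pure state (the contribution here is sharpening such relations into equalities):

\[
\tau_{A|BC} \;\ge\; \tau_{AB} + \tau_{AC},
\]

where $\tau_{A|BC}$ quantifies the entanglement of qubit $A$ with the pair $BC$, and $\tau_{AB}$, $\tau_{AC}$ are the pairwise tangles.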

  16. The Intelligence of Dual Simplex Method to Solve Linear Fractional Fuzzy Transportation Problem

    PubMed Central

    Narayanamoorthy, S.; Kalyani, S.

    2015-01-01

    An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In this proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. These two linear problems are solved by the dual simplex method, and from their solutions the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example. PMID:25810713
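To make the objective concrete: a linear fractional transportation problem minimizes a ratio of two linear transportation costs over the feasible shipment plans. The brute-force sketch below is not the paper's dual simplex decomposition (and handles crisp, not fuzzy, data with invented costs and supplies); it simply scans the one free cell of a balanced 2×2 plan.

```python
def fractional_cost(x, C, D):
    """Ratio of two linear transportation costs for plan x."""
    num = sum(C[i][j] * x[i][j] for i in range(2) for j in range(2))
    den = sum(D[i][j] * x[i][j] for i in range(2) for j in range(2))
    return num / den

# balanced instance: supplies (10, 20), demands (15, 15)
C = [[4, 3], [2, 5]]   # numerator costs (illustrative)
D = [[1, 2], [3, 1]]   # denominator costs (illustrative)

best = None
for x11 in range(0, 11):  # choosing x11 fixes the whole 2x2 plan
    x = [[x11, 10 - x11], [15 - x11, 5 + x11]]
    r = fractional_cost(x, C, D)
    if best is None or r < best[0]:
        best = (r, x)
```

For this data the minimum ratio 85/70 is attained at the plan [[0, 10], [15, 5]]; larger instances need a proper fractional-programming method rather than enumeration.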

  17. Testing the statistical compatibility of independent data sets

    NASA Astrophysics Data System (ADS)

    Maltoni, M.; Schwetz, T.

    2003-08-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ2 minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistic is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
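The idea can be sketched for the simplest case: two independent Gaussian measurements of one shared parameter. The test statistic is the global χ2 minimum minus the sum of the individual minima, distributed as χ2 with (here) one degree of freedom. The numbers below are invented; the paper gives the general derivation.

```python
import math

# two independent Gaussian measurements of the same parameter theta
a, sa = 1.0, 0.5   # dataset 1: best fit and 1-sigma error
b, sb = 2.0, 0.5   # dataset 2

def chi2_total(theta):
    return ((theta - a) / sa) ** 2 + ((theta - b) / sb) ** 2

# each dataset alone is fit perfectly at its own best-fit point
chi2_min_individual = 0.0 + 0.0

# the global minimum sits at the inverse-variance-weighted mean
theta_glob = (a / sa**2 + b / sb**2) / (1 / sa**2 + 1 / sb**2)
chi2_pg = chi2_total(theta_glob) - chi2_min_individual

# p-value of a chi-square variable with 1 degree of freedom
p_value = math.erfc(math.sqrt(chi2_pg / 2.0))
```

Here the two measurements disagree by two combined standard deviations' worth of χ2 (chi2_pg = 2), giving a compatibility p-value of about 0.157; data points insensitive to theta would leave chi2_pg unchanged, which is the diluting effect the method avoids.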

  18. Design of Linear Quadratic Regulators and Kalman Filters

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Geyser, L.

    1986-01-01

    AESOP solves problems associated with design of controls and state estimators for linear time-invariant systems. Systems considered are modeled in state-variable form by set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are the linear quadratic regulator (LQR) design problem and the steady-state Kalman filter design problem. AESOP is interactive. User solves design problems and analyzes solutions in single interactive session. Both numerical and graphical information available to user during the session.
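For the scalar case, the LQR design problem has a closed-form answer via the algebraic Riccati equation. The sketch below is illustrative only (AESOP itself handles full multivariable state-space models), with made-up plant and weight values.

```python
import math

def lqr_scalar(a, b, q, r):
    """LQR for the scalar plant xdot = a*x + b*u with cost
    integral(q*x^2 + r*u^2) dt: solve the scalar algebraic Riccati
    equation (b^2/r)*p^2 - 2*a*p - q = 0 for its positive root,
    then the optimal state-feedback gain is k = b*p/r."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    k = b * p / r
    return p, k

# unstable plant a = 1 with unit input gain and unit weights
p, k = lqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
a_cl = 1.0 - 1.0 * k   # closed-loop pole a - b*k
```

With these values p = k = 1 + √2 and the closed-loop pole moves to -√2, so the regulator stabilizes the open-loop-unstable plant.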

  19. Determining the mechanism by which fish diversity influences production.

    PubMed

    Carey, Michael P; Wahl, David H

    2011-09-01

    Understanding the ability of biodiversity to govern ecosystem function is essential with current pressures on natural communities from species invasions and extirpations. Changes in fish communities can be a major determinant of food web dynamics, and even small shifts in species composition or richness can translate into large effects on ecosystems. In addition, there is a large information gap in extrapolating results of small-scale biodiversity-ecosystem function experiments to natural systems with realistic environmental complexity. Thus, we tested the key mechanisms (resource complementarity and selection effect) for biodiversity to influence fish production in mesocosms and ponds. Fish diversity treatments were created by replicating species richness and species composition within each richness level. In mesocosms, increasing richness had a positive effect on fish biomass with an overyielding pattern indicating species mixtures were more productive than any individual species. Additive partitioning confirmed a positive net effect of biodiversity driven by a complementarity effect. Productivity was less affected by species diversity when species were more similar. Thus, the primary mechanism driving fish production in the mesocosms was resource complementarity. In the ponds, the mechanism driving fish production changed through time. The key mechanism was initially resource complementarity until production was influenced by the selection effect. Varying strength of intraspecific interactions resulting from differences in resource levels and heterogeneity likely caused differences in mechanisms between the mesocosm and pond experiments, as well as changes through time in the ponds. Understanding the mechanisms by which fish diversity governs ecosystem function and how environmental complexity and resource levels alter these relationships can be used to improve predictions for natural systems.

  20. Profiling Charge Complementarity and Selectivity for Binding at the Protein Surface

    PubMed Central

    Sulea, Traian; Purisima, Enrico O.

    2003-01-01

    A novel analysis and representation of the protein surface in terms of electrostatic binding complementarity and selectivity is presented. The charge optimization methodology is applied in a probe-based approach that simulates the binding process to the target protein. The molecular surface is color coded according to calculated optimal charge or according to charge selectivity, i.e., the binding cost of deviating from the optimal charge. The optimal charge profile depends on both the protein shape and charge distribution whereas the charge selectivity profile depends only on protein shape. High selectivity is concentrated in well-shaped concave pockets, whereas solvent-exposed convex regions are not charge selective. This suggests the synergy of charge and shape selectivity hot spots toward molecular selection and recognition, as well as the asymmetry of charge selectivity at the binding interface of biomolecular systems. The charge complementarity and selectivity profiles map relevant electrostatic properties in a readily interpretable way and encode information that is quite different from that visualized in the standard electrostatic potential map of unbound proteins. PMID:12719221

  1. I-HEDGE: determining the optimum complementary sets of taxa for conservation using evolutionary isolation

    PubMed Central

    Mooers, Arne Ø.; Caccone, Adalgisa; Russello, Michael A.

    2016-01-01

    In the midst of the current biodiversity crisis, conservation efforts might profitably be directed towards ensuring that extinctions do not result in inordinate losses of evolutionary history. Numerous methods have been developed to evaluate the importance of species based on their contribution to total phylogenetic diversity on trees and networks, but existing methods fail to take complementarity into account, and thus cannot identify the best order or subset of taxa to protect. Here, we develop a novel iterative calculation of the heightened evolutionary distinctiveness and globally endangered metric (I-HEDGE) that produces the optimal ranked list for conservation prioritization, taking into account complementarity and based on both phylogenetic diversity and extinction probability. We applied this metric to a phylogenetic network based on mitochondrial control region data from extant and recently extinct giant Galápagos tortoises, a highly endangered group of closely related species. We found that the restoration of two extinct species (a project currently underway) will contribute the greatest gain in phylogenetic diversity, and present an ordered list of rankings that is the optimum complementarity set for conservation prioritization. PMID:27635324

  2. I-HEDGE: determining the optimum complementary sets of taxa for conservation using evolutionary isolation.

    PubMed

    Jensen, Evelyn L; Mooers, Arne Ø; Caccone, Adalgisa; Russello, Michael A

    2016-01-01

    In the midst of the current biodiversity crisis, conservation efforts might profitably be directed towards ensuring that extinctions do not result in inordinate losses of evolutionary history. Numerous methods have been developed to evaluate the importance of species based on their contribution to total phylogenetic diversity on trees and networks, but existing methods fail to take complementarity into account, and thus cannot identify the best order or subset of taxa to protect. Here, we develop a novel iterative calculation of the heightened evolutionary distinctiveness and globally endangered metric (I-HEDGE) that produces the optimal ranked list for conservation prioritization, taking into account complementarity and based on both phylogenetic diversity and extinction probability. We applied this metric to a phylogenetic network based on mitochondrial control region data from extant and recently extinct giant Galápagos tortoises, a highly endangered group of closely related species. We found that the restoration of two extinct species (a project currently underway) will contribute the greatest gain in phylogenetic diversity, and present an ordered list of rankings that is the optimum complementarity set for conservation prioritization.

  3. Somatic Hypermutation-Induced Changes in the Structure and Dynamics of HIV-1 Broadly Neutralizing Antibodies.

    PubMed

    Davenport, Thaddeus M; Gorman, Jason; Joyce, M Gordon; Zhou, Tongqing; Soto, Cinque; Guttman, Miklos; Moquin, Stephanie; Yang, Yongping; Zhang, Baoshan; Doria-Rose, Nicole A; Hu, Shiu-Lok; Mascola, John R; Kwong, Peter D; Lee, Kelly K

    2016-08-02

    Antibody somatic hypermutation (SHM) and affinity maturation enhance antigen recognition by modifying antibody paratope structure to improve its complementarity with the target epitope. SHM-induced changes in paratope dynamics may also contribute to antibody maturation, but direct evidence of this is limited. Here, we examine two classes of HIV-1 broadly neutralizing antibodies (bNAbs) for SHM-induced changes in structure and dynamics, and delineate the effects of these changes on interactions with the HIV-1 envelope glycoprotein (Env). In combination with new and existing structures of unmutated and affinity matured antibody Fab fragments, we used hydrogen/deuterium exchange with mass spectrometry to directly measure Fab structural dynamics. Changes in antibody structure and dynamics were positioned to improve complementarity with Env, with changes in dynamics primarily observed at the paratope peripheries. We conclude that SHM optimizes paratope complementarity to conserved HIV-1 epitopes and restricts the mobility of paratope-peripheral residues to minimize clashes with variable features on HIV-1 Env. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Intraspecific genetic diversity and composition modify species-level diversity-productivity relationships.

    PubMed

    Schöb, Christian; Kerle, Sarah; Karley, Alison J; Morcillo, Luna; Pakeman, Robin J; Newton, Adrian C; Brooker, Rob W

    2015-01-01

    Biodiversity regulates ecosystem functions such as productivity, and experimental studies of species mixtures have revealed selection and complementarity effects driving these responses. However, the impacts of intraspecific genotypic diversity in these studies are unknown, despite it forming a substantial part of the biodiversity. In a glasshouse experiment we constructed plant communities with different levels of barley (Hordeum vulgare) genotype and weed species diversity and assessed their relative biodiversity effects through additive partitioning into selection and complementarity effects. Barley genotype diversity had weak positive effects on aboveground biomass through complementarity effects, whereas weed species diversity increased biomass predominantly through selection effects. When combined, increasing genotype diversity of barley tended to dilute the selection effect of weeds. We interpret these different effects of barley genotype and weed species diversity as the consequence of small vs large trait variation associated with intraspecific barley diversity and interspecific weed diversity, respectively. The different effects of intra- vs interspecific diversity highlight the underestimated and overlooked role of genetic diversity for ecosystem functioning. © 2014 The Authors New Phytologist © 2014 New Phytologist Trust.

  5. Dependence of the quark-lepton complementarity on parametrizations of the Cabibbo-Kobayashi-Maskawa and Pontecorvo-Maki-Nakagawa-Sakata matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Yajuan

    2010-04-01

    The quark-lepton complementarity (QLC) is very suggestive in understanding possible relations between quark and lepton mixing matrices. We explore the QLC relations in all the possible angle-phase parametrizations and point out that they can approximately hold in five parametrizations. Furthermore, the vanishing of the smallest mixing angles in the Cabibbo-Kobayashi-Maskawa and Pontecorvo-Maki-Nakagawa-Sakata matrices ensures that the QLC relations hold exactly in those five parametrizations. Finally, the sensitivity of the QLC relations to radiative corrections is also discussed.
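For reference, the QLC relations in their commonly quoted form (the abstract does not spell them out, so this form is an assumption) tie the corresponding CKM and PMNS mixing angles together:

```latex
\theta_{12}^{\mathrm{PMNS}} + \theta_{12}^{\mathrm{CKM}} \approx 45^{\circ},
\qquad
\theta_{23}^{\mathrm{PMNS}} + \theta_{23}^{\mathrm{CKM}} \approx 45^{\circ}.
```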

  6. Dark-matter decay as a complementary probe of multicomponent dark sectors.

    PubMed

    Dienes, Keith R; Kumar, Jason; Thomas, Brooks; Yaylali, David

    2015-02-06

    In single-component theories of dark matter, the 2→2 amplitudes for dark-matter production, annihilation, and scattering can be related to each other through various crossing symmetries. The detection techniques based on these processes are thus complementary. However, multicomponent theories exhibit an additional direction for dark-matter complementarity: the possibility of dark-matter decay from heavier to lighter components. We discuss how this new detection channel may be correlated with the others, and demonstrate that the enhanced complementarity which emerges can be an important ingredient in probing and constraining the parameter spaces of such models.

  7. Health, Enterprise, and Labor Complementarity in the Household*

    PubMed Central

    Adhvaryu, Achyuta; Nyshadham, Anant

    2017-01-01

    We study the role of household enterprise as a coping mechanism after health shocks. Using variation in the cost of traveling to formal sector health facilities to predict recovery from acute illness in Tanzania, we show that individuals with prolonged illness switch from farm labor to enterprise activity. This response occurs along both the extensive (entry) and intensive (capital stock and labor supply) margins. Family members who are not ill exhibit exactly the same pattern of responses. Deriving a simple extension to the canonical agricultural household model, we show that our results suggest complementarities in household labor. PMID:28943705

  8. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
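The paper's two-phase relaxation is not reproduced in the abstract; the sketch below only illustrates the generic branch-and-bound pattern for a multiplicative objective, using products of endpoint values of the linear factors as a lower bound on each subinterval (an interval bound chosen for simplicity, not the paper's linear relaxation):

```python
import heapq

def bb_min_linear_product(g, h, lo, hi, tol=1e-4):
    """Minimize f(x) = g(x)*h(x) on [lo, hi] by branch and bound, where g
    and h are linear (so each attains its extremes at interval endpoints).

    Lower bound on a subinterval: the smallest product of endpoint values
    of g and h, a valid interval-product bound for linear factors."""
    def lower(l, u):
        gs = (g(l), g(u))
        hs = (h(l), h(u))
        return min(a * b for a in gs for b in hs)

    best_x = lo
    best = g(lo) * h(lo)
    heap = [(lower(lo, hi), lo, hi)]          # nodes ordered by lower bound
    while heap:
        lb, l, u = heapq.heappop(heap)
        if lb > best - tol:                   # cannot improve: prune
            continue
        m = 0.5 * (l + u)
        fm = g(m) * h(m)                      # midpoint gives an upper bound
        if fm < best:
            best, best_x = fm, m
        heapq.heappush(heap, (lower(l, m), l, m))
        heapq.heappush(heap, (lower(m, u), m, u))
    return best_x, best
```

For g(x) = x - 1 and h(x) = x - 2 on [0, 3], the sketch locates the minimizer x = 1.5 with value -0.25.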

  9. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

    There is growing interest in solving linear L1 estimation problems because of the sparsity of the solution and its robustness against non-Gaussian noise. This paper proposes a discrete-time neural network that can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. The proposed neural network is then efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the nonunique solutions of linear L1 estimation problems but also needs much less computational time than related algorithms in solving both linear L1 estimation and image restoration problems.
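The discrete-time neural network itself is not specified in the abstract; as a hedged, self-contained illustration of the underlying linear L1 (least absolute deviations) estimation task and its robustness to non-Gaussian noise, here is an iteratively reweighted least squares sketch for a line fit (a different, classical method, shown only to make the problem concrete):

```python
def l1_line_fit(xs, ys, iters=60, eps=1e-8):
    """Approximate the L1 (least absolute deviations) line y = a + b*x by
    iteratively reweighted least squares: the weight w_i = 1/max(|r_i|, eps)
    makes the weighted 2-norm mimic the 1-norm of the residuals."""
    w = [1.0] * len(xs)                # start from ordinary least squares
    a = b = 0.0
    for _ in range(iters):
        S0 = sum(w)
        S1 = sum(wi * x for wi, x in zip(w, xs))
        S2 = sum(wi * x * x for wi, x in zip(w, xs))
        T0 = sum(wi * y for wi, y in zip(w, ys))
        T1 = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = S0 * S2 - S1 * S1
        a = (S2 * T0 - S1 * T1) / det  # weighted 2x2 normal equations
        b = (S0 * T1 - S1 * T0) / det
        w = [1.0 / max(abs(y - (a + b * x)), eps)
             for x, y in zip(xs, ys)]
    return a, b
```

With five points exactly on y = 1 + 2x and one gross outlier, the L1 fit recovers the line, whereas an ordinary least-squares fit would be pulled far off by the outlier.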

  10. Can Linear Superiorization Be Useful for Linear Optimization Problems?

    PubMed Central

    Censor, Yair

    2017-01-01

    Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660
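The abstract names the ingredients but not the algorithmic details; the sketch below is a minimal illustration of the superiorization idea, assuming sequential orthogonal projections onto half-spaces as the feasibility-seeking algorithm and a summable geometric sequence of perturbation steps along the negative target direction (both are assumptions, not the paper's exact setup):

```python
import math

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto {z : a.z >= b} (no-op if satisfied)."""
    dot = sum(ai * xi for ai, xi in zip(a, x))
    if dot >= b:
        return x
    t = (b - dot) / sum(ai * ai for ai in a)
    return [xi + t * ai for ai, xi in zip(a, x)]

def superiorized_feasibility(x, halfspaces, c, iters=200, kernel=0.9):
    """Feasibility-seeking (sequential projections) with superiorization:
    before each projection sweep, take a shrinking step along -c to reduce
    the linear target c.x without abandoning feasibility-seeking."""
    cn = math.sqrt(sum(ci * ci for ci in c))
    step = 1.0
    for _ in range(iters):
        x = [xi - step * ci / cn for ci, xi in zip(c, x)]  # perturbation
        for a, b in halfspaces:                            # projection sweep
            x = project_halfspace(x, a, b)
        step *= kernel                                     # summable steps
    return x
```

Started from a feasible point such as (2, 2) with target c = (1, 1) and constraints x >= 0, y >= 0, x + y >= 1, plain feasibility seeking would stop immediately at target value 4, while the superiorized run returns a feasible point with target value close to 1.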

  11. Compliant contact versus rigid contact: A comparison in the context of granular dynamics

    NASA Astrophysics Data System (ADS)

    Pazouki, Arman; Kwarta, Michał; Williams, Kyle; Likos, William; Serban, Radu; Jayakumar, Paramsothy; Negrut, Dan

    2017-10-01

    We summarize and numerically compare two approaches for modeling and simulating the dynamics of dry granular matter. The first one, the discrete-element method via penalty (DEM-P), is commonly used in the soft matter physics and geomechanics communities; it can be traced back to the work of Cundall and Strack [P. Cundall, Proc. Symp. ISRM, Nancy, France 1, 129 (1971); P. Cundall and O. Strack, Geotechnique 29, 47 (1979), 10.1680/geot.1979.29.1.47]. The second approach, the discrete-element method via complementarity (DEM-C), considers the grains perfectly rigid and enforces nonpenetration via complementarity conditions; it is commonly used in robotics and computer graphics applications and had two strong promoters in Moreau and Jean [J. J. Moreau, in Nonsmooth Mechanics and Applications, edited by J. J. Moreau and P. D. Panagiotopoulos (Springer, Berlin, 1988), pp. 1-82; J. J. Moreau and M. Jean, Proceedings of the Third Biennial Joint Conference on Engineering Systems and Analysis, Montpellier, France, 1996, pp. 201-208]. The DEM-P and DEM-C are manifestly unlike each other: They use different (i) approaches to model the frictional contact problem, (ii) sets of model parameters to capture the physics of interest, and (iii) classes of numerical methods to solve the differential equations that govern the dynamics of the granular material. Herein, we report numerical results for five experiments: shock wave propagation, cone penetration, direct shear, triaxial loading, and hopper flow, which we use to compare the DEM-P and DEM-C solutions. This exercise helps us reach two conclusions. First, both the DEM-P and DEM-C are predictive, i.e., they predict well the macroscale emergent behavior by capturing the dynamics at the microscale. Second, there are classes of problems for which one of the methods has an advantage. Unlike the DEM-P, the DEM-C cannot capture shock-wave propagation through granular media. 
However, the DEM-C is proficient at handling arbitrary grain geometries and solves, at large integration step sizes, smaller problems, i.e., containing thousands of elements, very effectively. The DEM-P vs DEM-C comparison is carried out using a public-domain, open-source software package; the models used are available online.

  12. Compliant contact versus rigid contact: A comparison in the context of granular dynamics.

    PubMed

    Pazouki, Arman; Kwarta, Michał; Williams, Kyle; Likos, William; Serban, Radu; Jayakumar, Paramsothy; Negrut, Dan

    2017-10-01

    We summarize and numerically compare two approaches for modeling and simulating the dynamics of dry granular matter. The first one, the discrete-element method via penalty (DEM-P), is commonly used in the soft matter physics and geomechanics communities; it can be traced back to the work of Cundall and Strack [P. Cundall, Proc. Symp. ISRM, Nancy, France 1, 129 (1971); P. Cundall and O. Strack, Geotechnique 29, 47 (1979), 10.1680/geot.1979.29.1.47]. The second approach, the discrete-element method via complementarity (DEM-C), considers the grains perfectly rigid and enforces nonpenetration via complementarity conditions; it is commonly used in robotics and computer graphics applications and had two strong promoters in Moreau and Jean [J. J. Moreau, in Nonsmooth Mechanics and Applications, edited by J. J. Moreau and P. D. Panagiotopoulos (Springer, Berlin, 1988), pp. 1-82; J. J. Moreau and M. Jean, Proceedings of the Third Biennial Joint Conference on Engineering Systems and Analysis, Montpellier, France, 1996, pp. 201-208]. The DEM-P and DEM-C are manifestly unlike each other: They use different (i) approaches to model the frictional contact problem, (ii) sets of model parameters to capture the physics of interest, and (iii) classes of numerical methods to solve the differential equations that govern the dynamics of the granular material. Herein, we report numerical results for five experiments: shock wave propagation, cone penetration, direct shear, triaxial loading, and hopper flow, which we use to compare the DEM-P and DEM-C solutions. This exercise helps us reach two conclusions. First, both the DEM-P and DEM-C are predictive, i.e., they predict well the macroscale emergent behavior by capturing the dynamics at the microscale. Second, there are classes of problems for which one of the methods has an advantage. Unlike the DEM-P, the DEM-C cannot capture shock-wave propagation through granular media. 
However, the DEM-C is proficient at handling arbitrary grain geometries and solves, at large integration step sizes, smaller problems, i.e., containing thousands of elements, very effectively. The DEM-P vs DEM-C comparison is carried out using a public-domain, open-source software package; the models used are available online.

  13. Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel

    ERIC Educational Resources Information Center

    El-Gebeily, M.; Yushau, B.

    2008-01-01

    In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
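Outside Excel, the MINVERSE/MMULT route to solving a linear system corresponds to standard Gaussian elimination; a minimal, self-contained sketch (illustrative only, not the note's spreadsheet workflow):

```python
def solve_linear_system(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    This is the computation behind spreadsheet formulas such as
    MMULT(MINVERSE(A), b), performed directly on copies of the inputs."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        # pivot: swap in the row with the largest entry in column k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                  # eliminate below pivot
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (M[i][n]
                - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

For example, solving 2x + y = 3, x + 3y = 5 returns x = 0.8, y = 1.4.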

  14. Non-linear analysis of wave propagation using transform methods and plates and shells using integral equations

    NASA Astrophysics Data System (ADS)

    Pipkins, Daniel Scott

    Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible, lattice structures. The technique used combines the Laplace Transform with the Finite Element Method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact. As a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms. The non-linear terms are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams. For non-linear systems, a viscoelastic rod and Von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries which is loaded by four self-equilibrating corner forces. The results are compared to two existing numerical solutions of the problem which differ substantially.

  15. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  16. On the linear programming bound for linear Lee codes.

    PubMed

    Astola, Helena; Tabus, Ioan

    2016-01-01

    Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.

  17. Forecasting in the presence of expectations

    NASA Astrophysics Data System (ADS)

    Allen, R.; Zivin, J. G.; Shrader, J.

    2016-05-01

    Physical processes routinely influence economic outcomes, and actions by economic agents can, in turn, influence physical processes. This feedback creates challenges for forecasting and inference, creating the potential for complementarity between models from different academic disciplines. Using the example of prediction of water availability during a drought, we illustrate the potential biases in forecasts that only take part of a coupled system into account. In particular, we show that forecasts can alter the feedbacks between supply and demand, leading to inaccurate predictions about future states of the system. Although the example is specific to drought, the problem of feedback between expectations and forecast quality is not isolated to the particular model: it is relevant to areas as diverse as population assessments for conservation, balancing the electrical grid, and setting macroeconomic policy.

  18. Fundamental solution of the problem of linear programming and method of its determination

    NASA Technical Reports Server (NTRS)

    Petrunin, S. V.

    1978-01-01

    The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.

  19. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems

    PubMed Central

    Tuo, Shouheng; Yong, Longquan; Deng, Fang’an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary to each other: HS has strong global exploration power but low convergence speed, whereas TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing global exploration and exploitation, where HS aims mainly to explore unknown regions and TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for complex high-dimensional optimization problems. An experiment on portfolio optimization problems also demonstrates that HSTLBO is effective in solving complex real-world applications. PMID:28403224

  20. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems.

    PubMed

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary to each other: HS has strong global exploration power but low convergence speed, whereas TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing global exploration and exploitation, where HS aims mainly to explore unknown regions and TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for complex high-dimensional optimization problems. An experiment on portfolio optimization problems also demonstrates that HSTLBO is effective in solving complex real-world applications.

  1. Reinforcement learning in complementarity game and population dynamics

    NASA Astrophysics Data System (ADS)

    Jost, Jürgen; Li, Wei

    2014-02-01

    We systematically test and compare different reinforcement learning schemes in a complementarity game [J. Jost and W. Li, Physica A 345, 245 (2005), 10.1016/j.physa.2004.07.005] played between members of two populations. More precisely, we study the Roth-Erev, Bush-Mosteller, and SoftMax reinforcement learning schemes. A modified version of Roth-Erev with a power exponent of 1.5, as opposed to 1 in the standard version, performs best. We also compare these reinforcement learning strategies with evolutionary schemes. This gives insight into aspects like the issue of quick adaptation as opposed to systematic exploration or the role of learning rates.
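The abstract does not reproduce the update rules; in the textbook form of Roth-Erev learning sketched below, the chosen action's propensity is reinforced by the received payoff, and the "power exponent of 1.5" is read here, as one plausible interpretation, as an exponent in the propensity-proportional choice rule:

```python
def roth_erev_update(propensities, action, reward, forgetting=0.0):
    """Basic Roth-Erev reinforcement: the chosen action's propensity is
    increased by the received payoff (optionally all propensities decay
    first via a forgetting parameter)."""
    q = [(1.0 - forgetting) * qi for qi in propensities]
    q[action] += reward
    return q

def choice_probabilities(propensities, power=1.0):
    """Choice rule: p_j proportional to q_j**power.  The modified scheme
    with a power exponent of 1.5 is sharper than the standard power 1.0."""
    scores = [qi ** power for qi in propensities]
    total = sum(scores)
    return [s / total for s in scores]
```

Raising the exponent from 1 to 1.5 sharpens the choice probabilities toward the more strongly reinforced action.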

  2. Structure—activity relationships for insecticidal carbamates*

    PubMed Central

    Metcalf, Robert L.

    1971-01-01

    Carbamate insecticides are biologically active because of their structural complementarity to the active site of acetylcholinesterase (AChE) and their consequent action as substrates with very low turnover numbers. Carbamates behave as synthetic neurohormones that produce their toxic action by interrupting the normal action of AChE so that acetylcholine accumulates at synaptic junctions. The necessary properties for a suitable insecticidal carbamate are lipid solubility, suitable structural complementarity to AChE, and sufficient stability to multifunction-oxidase detoxification. The relationships between the structure and the activity of a large number of synthetic carbamates are analysed in detail, with particular attention to the second of these properties. PMID:5315358

  3. Open-quantum-systems approach to complementarity in neutral-kaon interferometry

    NASA Astrophysics Data System (ADS)

    de Souza, Gustavo; de Oliveira, J. G. G.; Varizi, Adalberto D.; Nogueira, Edson C.; Sampaio, Marcos D.

    2016-12-01

    In bipartite quantum systems, entanglement correlations between the parties exert a direct influence on the phenomenon of wave-particle duality. This effect has been quantitatively analyzed in the context of two qubits by Jakob and Bergou [Opt. Commun. 283, 827 (2010), 10.1016/j.optcom.2009.10.044]. Employing a description of the K-meson propagation in free space where its weak decay states are included as a second party, we study here this effect in the kaon-antikaon oscillations. We show that a new quantitative "triality" relation holds, similar to the one considered by Jakob and Bergou. In our case, it relates the distinguishability between the decay-product states corresponding to the distinct kaon propagation modes KS, KL, the amount of wave-like path interference between these states, and the amount of entanglement given by the reduced von Neumann entropy. The inequality can account for the complementarity between strangeness oscillations and lifetime information previously considered in the literature, therefore allowing one to see how it is affected by entanglement correlations. As we will discuss, it allows one to visualize clearly through the K0-K̄0 oscillations the fundamental role of entanglement in quantum complementarity.

  4. Plant genotypic diversity reduces the rate of consumer resource utilization

    PubMed Central

    McArt, Scott H.; Thaler, Jennifer S.

    2013-01-01

    While plant species diversity can reduce herbivore densities and herbivory, little is known regarding how plant genotypic diversity alters resource utilization by herbivores. Here, we show that an invasive folivore—the Japanese beetle (Popillia japonica)—increases 28 per cent in abundance, but consumes 24 per cent less foliage in genotypic polycultures compared with monocultures of the common evening primrose (Oenothera biennis). We found strong complementarity for reduced herbivore damage among plant genotypes growing in polycultures and a weak dominance effect of particularly resistant genotypes. Sequential feeding by P. japonica on different genotypes from polycultures resulted in reduced consumption compared with feeding on different plants of the same genotype from monocultures. Thus, diet mixing among plant genotypes reduced herbivore consumption efficiency. Despite positive complementarity driving an increase in fruit production in polycultures, we observed a trade-off between complementarity for increased plant productivity and resistance to herbivory, suggesting costs in the complementary use of resources by plant genotypes may manifest across trophic levels. These results elucidate mechanisms for how plant genotypic diversity simultaneously alters resource utilization by both producers and consumers, and show that population genotypic diversity can increase the resistance of a native plant to an invasive herbivore. PMID:23658201

  6. Climate conditions of the “El Niño” phenomenon for a hydro-eolic complementarity project in Peru

    NASA Astrophysics Data System (ADS)

    Castillo N, Leonardo; Ortega M, Arturo; Luyo, Jaime E.

    2018-05-01

    Northern Peru is threatened by the consequences of a natural phenomenon called “El Niño”, mainly during the months of December to April. In the summer of 2017, this event brought strong climatic variations with intense rains, raising the water levels of the Chira and Piura rivers, filling the Poechos reservoir, and causing flooding and mudslides. However, from an energy perspective, these climatic alterations have a strong potential to increase the availability of wind and hydro renewable energy in northern Peru. This work evaluates hydro-eolic complementarity as part of the sustainability of energy systems. The study includes an evaluation of historical records of wind velocity and water flow rates. It then performs a correlation analysis and estimates the hydro and wind energy potentials generated by this phenomenon. The implications of the "El Niño" phenomenon are mostly negative. Nonetheless, it is possible to take advantage of higher wind and water flow rates with a hybrid energy system. The results show a high degree of complementarity under both normal and "El Niño" conditions in northern Peru.

  7. Can linear superiorization be useful for linear optimization problems?

    NASA Astrophysics Data System (ADS)

    Censor, Yair

    2017-04-01

    Linear superiorization (LinSup) considers linear programming problems, but instead of attempting to solve them with linear optimization methods it employs perturbation-resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) Does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on the computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
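
The feasibility-seeking-plus-perturbation loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's experimental setup: the cyclic-projection method, the toy constraint set, and the geometric step-size schedule are all assumptions.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the half-space {y : a.y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def linsup(A, b, c, x0, sweeps=200, beta=1.0, decay=0.99):
    """Cyclic-projection feasibility seeking, superiorized by small
    perturbations toward lower c.x (illustrative step-size schedule)."""
    x = np.asarray(x0, dtype=float)
    step = beta
    for _ in range(sweeps):
        x = x - step * c / np.linalg.norm(c)  # objective-reducing perturbation
        step *= decay                          # shrinking step sizes
        for a_i, b_i in zip(A, b):             # one sweep over the constraints
            x = project_halfspace(x, a_i, b_i)
    return x

# Toy region: x1 >= 0, x2 >= 0, x1 + x2 >= 1, written as A x <= b;
# the target function to reduce is c.x with c = (1, 1).
A = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])
c = np.array([1.0, 1.0])
print(linsup(A, b, c, np.array([2.0, 2.0])))  # lands near the face x1 + x2 = 1
```

Without the perturbation step the algorithm would stop at the already-feasible start (2, 2) with c.x = 4; superiorization steers it to a feasible point with a much lower target value, which is exactly question (i) of the abstract.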

  8. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.

  9. A linear programming manual

    NASA Technical Reports Server (NTRS)

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
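
The manual's thread from simultaneous linear equations to dual problems can be shown in a few lines: at an optimal vertex of an LP, both the primal point and the dual prices solve small linear systems over the active constraints. The example LP and the choice of active set below are illustrative assumptions, not taken from the manual.

```python
import numpy as np

# Primal: max 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0.
# At the optimal vertex the 2nd and 3rd constraints are active (assumed
# known here); the vertex solves a simultaneous linear system.
A_active = np.array([[0.0, 2.0], [3.0, 2.0]])
b_active = np.array([12.0, 18.0])
x = np.linalg.solve(A_active, b_active)   # vertex (2, 6)

# Dual prices for the active constraints solve A_active^T u = c.
c = np.array([3.0, 5.0])
u = np.linalg.solve(A_active.T, c)        # prices (1.5, 1.0)

# Strong duality: primal and dual objectives agree at optimality.
print(c @ x, b_active @ u)  # both 36.0
```

The dual prices are the "reduced cost analysis" quantities the manual illustrates: each tells how much the objective would improve per unit of slack added to its constraint.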

  10. Linear quadratic tracking problems in Hilbert space - Application to optimal active noise suppression

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.

    1989-01-01

    A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.

  11. Lorentz Invariance Violation: the Latest Fermi Results and the GRB-AGN Complementarity

    NASA Technical Reports Server (NTRS)

    Bolmont, J.; Vasileiou, V.; Jacholkowska, A.; Piron, F.; Couturier, C.; Granot, J.; Stecker, F. W.; Cohen-Tanugi, J.; Longo, F.

    2013-01-01

    Because they are bright and distant, Gamma-ray Bursts (GRBs) have been used for more than a decade to test the propagation of photons and to constrain relevant Quantum Gravity (QG) models in which the velocity of photons in vacuum can depend on their energy. With its unprecedented sensitivity and energy coverage, the Fermi satellite has provided the most constraining results on the QG energy scale so far. In this talk, the latest results obtained from the analysis of four bright GRBs observed by the Large Area Telescope will be reviewed. These robust results, cross-checked using three different analysis techniques, set the limit on the QG energy scale at E_QG,1 > 7.6 times the Planck energy for linear dispersion and E_QG,2 > 1.3 x 10^11 GeV for quadratic dispersion (95% CL). After describing the data and the analysis techniques in use, the results will be discussed and compared with the latest constraints obtained with Active Galactic Nuclei.

  12. The median problems on linear multichromosomal genomes: graph representation and fast exact solutions.

    PubMed

    Xu, Andrew Wei

    2010-09-01

    In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance [Formula: see text]. This is a key problem in genome rearrangement based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes--represented by adequate subgraphs--allows us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty in the circular case; this difficulty was underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into the capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm, ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it can also provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu.

  13. Prospective Middle School Mathematics Teachers' Knowledge of Linear Graphs in Context of Problem-Posing

    ERIC Educational Resources Information Center

    Kar, Tugrul

    2016-01-01

    This study examined prospective middle school mathematics teachers' problem-posing skills by investigating their ability to associate linear graphs with daily life situations. Prospective teachers were given linear graphs and asked to pose problems that could potentially be represented by the graphs. Their answers were analyzed in two stages. In…

  14. Analyzing Multilevel Data: An Empirical Comparison of Parameter Estimates of Hierarchical Linear Modeling and Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Rocconi, Louis M.

    2011-01-01

    Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…

  15. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
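
The final step of the pipeline above, folding a linear constraint into an unconstrained binary objective, can be illustrated on a toy instance. This sketch shows only the generic quadratic-penalty trick behind constrained-to-QUBO mappings (the paper's construction additionally eliminates the continuous PDE variables); the penalty weight and the tiny problem are assumptions.

```python
import itertools
import numpy as np

def qubo_bruteforce(Q):
    """Minimize x^T Q x over binary x by enumeration (toy sizes only;
    an AQO would sample this minimum physically)."""
    n = Q.shape[0]
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda x: np.asarray(x) @ Q @ np.asarray(x))
    return list(best)

# Toy constrained instance: minimize x1 + 2*x2 subject to x1 + x2 = 1.
# Penalty folding: add P*(x1 + x2 - 1)^2. Using x_i^2 = x_i for binaries,
# the penalty contributes -P to each diagonal term and +P to each
# off-diagonal term (the additive constant P does not move the argmin).
P = 10.0
Q = np.array([[1.0 - P, P],
              [P, 2.0 - P]])
print(qubo_bruteforce(Q))  # -> [1, 0], the cheaper constraint-satisfying choice
```

With P large relative to the objective coefficients, every constraint-violating assignment costs more than any feasible one, so the unconstrained minimum coincides with the constrained optimum.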

  16. Oxidation in the complementarity-determining regions differentially influences the properties of therapeutic antibodies

    PubMed Central

    Dashivets, Tetyana; Stracke, Jan; Dengl, Stefan; Knaupp, Alexander; Pollmann, Jan; Buchner, Johannes; Schlothauer, Tilman

    2016-01-01

    Therapeutic antibodies can undergo a variety of chemical modification reactions in vitro. Depending on the site of modification, either antigen binding or Fc-mediated functions can be affected. Oxidation of tryptophan residues is one of the post-translational modifications leading to altered antibody functionality. In this study, we examined the structural and functional properties of a therapeutic antibody construct and 2 affinity-matured variants thereof. Two of the 3 antibodies carry an oxidation-prone tryptophan residue in the complementarity-determining region of the VL domain. We demonstrate the differences in the stability and bioactivity of the 3 antibodies, and reveal differential degradation pathways for the antibodies susceptible to oxidation. PMID:27612038

  17. Linear solver performance in elastoplastic problem solution on GPU cluster

    NASA Astrophysics Data System (ADS)

    Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.

    2017-12-01

    Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in simulations of three-dimensional metal matrix composite microvolume deformation, tens or hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The convergence of these methods depends strongly on the operator spectrum of the problem's stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used. Different methods may be preferable for different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.

  18. Science as theater, theater as science

    NASA Astrophysics Data System (ADS)

    Lustig, Harry

    2002-04-01

    Beginning with Bertolt Brecht's "Galileo" in 1942 and Friedrich Dürrenmatt's "The Physicists" in 1962, physics and other sciences have served a number of dramatists as backdrops for the exposition of existential problems, as well as the provision of entertainment. Michael Frayn's 1998 play "Copenhagen" broke new ground by giving a central role to the presentation of scientific substance and ideas and to the examination of recent controversial and emotionally charged events in the history of science and of the "real world". A rash of "science plays" erupted. How should we physicists react to this development? Surely, it can be argued, any exposure of science to the public is better than none and will help break down the barriers between the "two cultures". But what if the science or the scientists are badly misrepresented, or the play is a weapon to strip science of its legitimacy and its claims to reality and truth? After reviewing a half dozen of the new plays, I conclude that "Copenhagen", though flawed, is not only the best of show, but a positive, even admirable endeavor. The contributions of Bohr, Heisenberg, Born, Schrödinger, and other scientists and their interactions in the golden years of the creation of quantum mechanics are accurately and thrillingly rendered. There may be no better non-technical exposition of complementarity and the uncertainty principle than the one that Frayn puts into the mouths of Bohr and Heisenberg. The treatment of the history of the atomic bomb and Heisenberg's role in Germany's failure to achieve a bomb is another matter. Frayn can also be criticized for applying uncertainty and complementarity to the macroscopic world and, in particular, to human interactions, thereby giving some aid and comfort to the post-modernists. These reservations aside, "Copenhagen" is a beautiful contribution to the appreciation of science.

  19. Prediction of Host-Derived miRNAs with the Potential to Target PVY in Potato Plants

    PubMed Central

    Iqbal, Muhammad S.; Hafeez, Muhammad N.; Wattoo, Javed I.; Ali, Arfan; Sharif, Muhammad N.; Rashid, Bushra; Tabassum, Bushra; Nasir, Idrees A.

    2016-01-01

    Potato virus Y has emerged as a threatening problem in all potato growing areas around the globe. PVY reduces the yield and quality of potato cultivars. During the last 30 years, significant genetic changes in PVY strains have been observed, with an increased incidence associated with crop damage. In the current study, computational approaches were applied to predict potato-derived miRNA targets in the PVY genome. The PVY genome is approximately 9,000 nucleotides long and encodes the following 6 genes: CI, NIa, NIb-Pro, HC-Pro, CP, and VPg. A total of 343 mature miRNAs were retrieved from the miRBase database and were examined for their target sequences in PVY genes using the minimum free energy (mfe), minimum folding energy, sequence complementarity and mRNA-miRNA hybridization approaches. The identified potato miRNAs against viral mRNA targets have antiviral activities, leading to translational inhibition by mRNA cleavage and/or mRNA blockage. We found 86 miRNAs targeting the PVY genome at 151 different sites. Moreover, only 36 miRNAs potentially targeted the PVY genome at 101 loci. The CI gene of the PVY genome was targeted by 32 miRNAs, followed by the complementarity of 26, 19, 18, 16, and 13 miRNAs. Most importantly, we found 5 miRNAs (miR160a-5p, miR7997b, miR166c-3p, miR399h, and miR5303d) that could target the CI, NIa, NIb-Pro, HC-Pro, CP, and VPg genes of PVY. The predicted miRNAs can be used for the development of PVY-resistant potato crops in the future. PMID:27683585
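
A bare-bones version of the sequence-complementarity check underlying such target prediction is sketched below. The study's actual pipeline used miRBase sequences with minimum-free-energy folding and hybridization scores; the scoring function and the toy sequences here are illustrative assumptions only.

```python
# Watson-Crick pairing for RNA strands (miRNA and mRNA target site).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complementarity_fraction(mirna, site):
    """Fraction of positions where the miRNA (5'->3') can base-pair with an
    mRNA target site (given 5'->3', read reversed for antiparallel pairing)."""
    if len(mirna) != len(site):
        raise ValueError("miRNA and target site must be the same length")
    matches = sum(COMPLEMENT[m] == t for m, t in zip(mirna, reversed(site)))
    return matches / len(mirna)

print(complementarity_fraction("AUGGC", "GCCAU"))  # perfect pairing -> 1.0
```

Real predictors combine such a complementarity score with a hybridization energy threshold, since near-perfect pairing alone over- and under-predicts functional sites.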

  20. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved which satisfies robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  1. Evaluation of PET texture features with heterogeneous phantoms: complementarity and effect of motion and segmentation method

    NASA Astrophysics Data System (ADS)

    Carles, M.; Torres-Espallardo, I.; Alberich-Bayarri, A.; Olivas, C.; Bello, P.; Nestle, U.; Martí-Bonmatí, L.

    2017-01-01

    A major source of error in quantitative PET/CT scans of lung cancer tumors is respiratory motion. Regarding the variability of PET texture features (TF), the impact of respiratory motion has not been properly studied with experimental phantoms. The primary aim of this work was to evaluate the current use of PET texture analysis for heterogeneity characterization in lesions affected by respiratory motion. Twenty-eight heterogeneous lesions were simulated by a mixture of alginate and 18F-fluoro-2-deoxy-D-glucose (FDG). Sixteen respiratory patterns were applied. Firstly, the TF response for different heterogeneous phantoms and its robustness with respect to the segmentation method were calculated. Secondly, the variability of TF derived from PET images with (gated, G-) and without (ungated, U-) motion compensation was analyzed. Finally, TF complementarity was assessed. In the comparison of TF derived from the ideal contour with respect to TF derived from 40%-threshold and adaptive-threshold PET contours, 7/8 TF showed strong linear correlation (LC) (p < 0.001, r > 0.75), despite a significant volume underestimation. Independence of lesion movement (LC in 100% of the combined pairs of movements, p < 0.05) was obtained for 1/8 TF with the U-image (width of the volume-activity histogram, WH) and 4/8 TF with the G-image (WH and energy (ENG), local homogeneity (LH) and entropy (ENT), derived from the co-occurrence matrix). Their variability in terms of the coefficient of variance (C_V) was C_V(WH) = 0.18 on the U-image, and C_V(WH) = 0.24, C_V(ENG) = 0.15, C_V(LH) = 0.07 and C_V(ENT) = 0.06 on the G-image. Apart from WH (r > 0.9, p < 0.001), none of these TF showed LC with C_max. Complementarity was observed for the TF pairs ENG-LH, CONT (contrast)-ENT and LH-ENT.
In conclusion, the effect of respiratory motion should be taken into account when the heterogeneity of lung cancer is quantified on PET/CT images. Despite inaccurate volume delineation, TF derived from 40% and COA contours could be reliable for their prognostic use. The TF that exhibited simultaneous added value and independence of lesion movement were ENG and ENT computed from the G-image. Their use is therefore recommended for heterogeneity quantification of lesions affected by respiratory motion.
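
The robustness measure used above, the coefficient of variance C_V, is simply the dispersion of a feature across the respiratory patterns relative to its mean. A minimal sketch follows; the abstract does not state whether the sample or population standard deviation was used, so the sample form (ddof=1) is assumed, and the example values are invented.

```python
import numpy as np

def coefficient_of_variance(values):
    """C_V = standard deviation / mean of a texture feature measured
    across respiratory patterns (sample std, ddof=1, assumed)."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

# A feature that barely moves across motion patterns has a low C_V,
# i.e. it is robust to respiratory motion:
print(coefficient_of_variance([0.061, 0.060, 0.059, 0.062]))  # ~0.02
```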

  2. Students’ difficulties in solving linear equation problems

    NASA Astrophysics Data System (ADS)

    Wati, S.; Fitriana, L.; Mardiyana

    2018-03-01

    A linear equation is an algebra topic taught from junior high school to university. It is a very important topic for students in order to learn more advanced mathematics. Therefore, linear equation material is essential to master. However, the results of the 2016 national examination in Indonesia showed that students’ achievement in solving linear equation problems was low. This fact motivated an investigation of students’ difficulties in solving linear equation problems. This study used a qualitative descriptive method. An individual written test on linear equation tasks was administered, followed by interviews. Twenty-one sample students of grade VIII of SMPIT Insan Kamil Karanganyar did the written test, and 6 of them were interviewed afterward. The results showed that students with high mathematics achievement do not have difficulties, students with medium mathematics achievement have factual difficulties, and students with low mathematics achievement have factual, conceptual, operational, and principle difficulties. Based on these results, there is a need for a meaningful teaching strategy to help students overcome difficulties in solving linear equation problems.

  3. Assessing non-uniqueness: An algebraic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, Don W.

    Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.

  4. ALPS: A Linear Program Solver

    NASA Technical Reports Server (NTRS)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices, along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
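
ALPS itself is a menu-driven DOS program, but the same solve path it describes (simplex on the LP relaxation, then branch-and-bound for integrality) is what a modern library call performs internally. A sketch using SciPy's HiGHS backend follows; the example LP is an assumption for illustration, not taken from the manual.

```python
from scipy.optimize import linprog

# Maximize 5*x1 + 4*x2  s.t.  6*x1 + 4*x2 <= 24,  x1 + 2*x2 <= 6,  x >= 0,
# with x integer. linprog minimizes, so the objective is negated.
c = [-5.0, -4.0]
A_ub = [[6.0, 4.0], [1.0, 2.0]]
b_ub = [24.0, 6.0]

# Simplex-type solve of the continuous relaxation:
relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
# Same model with integrality enforced (HiGHS runs branch-and-bound):
integer = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs", integrality=1)

print(relaxed.x, -relaxed.fun)   # fractional vertex (3.0, 1.5), value 21
print(integer.x, -integer.fun)   # integer optimum (4.0, 0.0), value 20
```

Note how the integer optimum is not obtained by rounding the relaxed vertex, which is why the branch-and-bound completion step is needed.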

  5. What is complementarity?: Niels Bohr and the architecture of quantum theory

    NASA Astrophysics Data System (ADS)

    Plotnitsky, Arkady

    2014-12-01

    This article explores Bohr’s argument, advanced under the heading of ‘complementarity,’ concerning quantum phenomena and quantum mechanics, and its physical and philosophical implications. In Bohr, the term complementarity designates both a particular concept and an overall interpretation of quantum phenomena and quantum mechanics, in part grounded in this concept. While the argument of this article is primarily philosophical, it will also address, historically, the development and transformations of Bohr’s thinking under the impact of the development of quantum theory and Bohr’s confrontation with Einstein, especially their exchange concerning the EPR experiment, proposed by Einstein, Podolsky and Rosen in 1935. Bohr’s interpretation was progressively characterized by a more radical epistemology; its ultimate form, developed in the 1930s and the one with which I shall be especially concerned here, is defined by his new concepts of phenomenon and atomicity. According to this epistemology, quantum objects are seen as indescribable and possibly even as inconceivable, manifesting their existence only in the effects of their interactions with measuring instruments upon those instruments, effects that define phenomena in Bohr’s sense. The absence of causality is an automatic consequence of this epistemology. I shall also consider how probability and statistics work under these epistemological conditions.

  6. Indivisibility, Complementarity and Ontology: A Bohrian Interpretation of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Roldán-Charria, Jairo

    2014-12-01

    The interpretation of quantum mechanics presented in this paper is inspired by two ideas that are fundamental in Bohr's writings: indivisibility and complementarity. Further basic assumptions of the proposed interpretation are completeness, universality and conceptual economy. In the interpretation, decoherence plays a fundamental role for the understanding of measurement. A general and precise conception of complementarity is proposed. It is fundamental in this interpretation to make a distinction between ontological reality, constituted by everything that does not depend at all on the collectivity of human beings, nor on their decisions or limitations, nor on their existence, and empirical reality constituted by everything that not being ontological is, however, intersubjective. According to the proposed interpretation, neither the dynamical properties, nor the constitutive properties of microsystems like mass, charge and spin, are ontological. The properties of macroscopic systems and space-time are also considered to belong to empirical reality. The acceptance of the above mentioned conclusion does not imply a total rejection of the notion of ontological reality. In the paper, utilizing the Aristotelian ideas of general cause and potentiality, a relation between ontological reality and empirical reality is proposed. Some glimpses of ontological reality, in the form of what can be said about it, are finally presented.

  7. Emergence of complementarity and the Baconian roots of Niels Bohr's method

    NASA Astrophysics Data System (ADS)

    Perovic, Slobodan

    2013-08-01

    I argue that instead of a rather narrow focus on N. Bohr's account of complementarity as a particular and perhaps obscure metaphysical or epistemological concept (or as being motivated by such a concept), we should consider it to result from pursuing a particular method of studying physical phenomena. More precisely, I identify a strong undercurrent of the Baconian method of induction in Bohr's work that likely emerged during his experimental training and practice. When its development is analyzed in light of Baconian induction, complementarity emerges as a levelheaded rather than a controversial account, carefully elicited from a comprehensive grasp of the available experimental basis and shunning hasty metaphysically motivated generalizations based on partial experimental evidence. In fact, Bohr's insistence on the "classical" nature of observations in experiments, as well as the counterintuitive synthesis of wave and particle concepts that has puzzled scholars, seems a natural outcome (an updated instance) of the inductive method. Such an analysis clarifies the intricacies of Schrödinger's early critique of the account, as well as Bohr's response, which have been misinterpreted in the literature. If adequate, the analysis may lend considerable support to the view that Bacon explicated the general terms of an experimentally minded strand of the scientific method, developed and refined by scientists in the following three centuries.

  8. Target mimicry provides a new mechanism for regulation of microRNA activity.

    PubMed

    Franco-Zorrilla, José Manuel; Valli, Adrián; Todesco, Marco; Mateos, Isabel; Puga, María Isabel; Rubio-Somoza, Ignacio; Leyva, Antonio; Weigel, Detlef; García, Juan Antonio; Paz-Ares, Javier

    2007-08-01

    MicroRNAs (miRNA) regulate key aspects of development and physiology in animals and plants. These regulatory RNAs act as guides of effector complexes to recognize specific mRNA sequences based on sequence complementarity, resulting in translational repression or site-specific cleavage. In plants, most miRNA targets are cleaved and show almost perfect complementarity with the miRNAs around the cleavage site. Here, we examined the non-protein coding gene IPS1 (INDUCED BY PHOSPHATE STARVATION 1) from Arabidopsis thaliana. IPS1 contains a motif with sequence complementarity to the phosphate (Pi) starvation-induced miRNA miR-399, but the pairing is interrupted by a mismatched loop at the expected miRNA cleavage site. We show that IPS1 RNA is not cleaved but instead sequesters miR-399. Thus, IPS1 overexpression results in increased accumulation of the miR-399 target PHO2 mRNA and, concomitantly, in reduced shoot Pi content. Engineering of IPS1 to be cleavable abolishes its inhibitory activity on miR-399. We coin the term 'target mimicry' to define this mechanism of inhibition of miRNA activity. Target mimicry can be generalized beyond the control of Pi homeostasis, as demonstrated using artificial target mimics.

  9. The generalized pole assignment problem. [dynamic output feedback problems

    NASA Technical Reports Server (NTRS)

    Djaferis, T. E.; Mitter, S. K.

    1979-01-01

    Two dynamic output feedback problems for a linear, strictly proper system are considered, along with their interrelationships. The problems are formulated in the frequency domain and investigated in terms of linear equations over rings of polynomials. Necessary and sufficient conditions are expressed using genericity.

  10. Detecting the role of individual species for overyielding in experimental grassland communities composed of potentially dominant species.

    PubMed

    Roscher, Christiane; Schumacher, Jens; Weisser, Wolfgang W; Schmid, Bernhard; Schulze, Ernst-Detlef

    2007-12-01

    Several studies have shown that the contribution of individual species to the positive relationship between species richness and community biomass production cannot be easily predicted from species monocultures. Here, we used a biodiversity experiment with a pool of nine potentially dominant grassland species to relate the species richness-productivity relationship to responses in density, size and aboveground allocation patterns of individual species. Aboveground community biomass increased strongly with the transition from monocultures to two-species mixtures but only slightly with the transition from two- to nine-species mixtures. Tripartite partitioning showed that the strong increase shown by the former was due to trait-independent complementarity effects, while the slight increase shown by the latter was due to dominance effects. Trait-dependent complementarity effects depended on species composition. Relative yield total (RYT) was greater than 1 (RYT>1) in mixtures but did not increase with species richness, which is consistent with the constant complementarity effect. The relative yield (RY) of only one species, Arrhenatherum elatius, continually increased with species richness, while those of the other species studied decreased with species richness or varied among different species compositions within richness levels. High observed/expected RYs (RYo/RYe>1) of individual species were mainly due to increased module densities, whereas low observed/expected RYs (RYo/RYe<1) were due to more pronounced decreases in module density (species with stoloniferous or creeping growth) or module size (species with clearly defined plant individuals). The trade-off between module density and size, typical for plant populations under the law of constant final yield, was compensated among species. The positive trait-independent complementarity effect could be explained by an increase in community module density, which reached a maximum at low species richness. In contrast, the increasing dominance effect was attributable to the species-specific ability, in particular that of A. elatius, to increase module size, while intrinsic growth limitations led to a suppression of the remaining species in many mixtures.

  11. Non-linear analytic and coanalytic problems (L_p-theory, Clifford analysis, examples)

    NASA Astrophysics Data System (ADS)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  12. Vegetative and Atmospheric Controls on the Bouchet-Morton Complementary Relationship Hypothesis

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Salvucci, G. D.; Phillips, N. G.; Daley, M. J.

    2006-12-01

    The Bouchet-Morton Complementary Relationship (CR) hypothesis is a potentially powerful analytic tool for understanding the feedback between evapotranspiring land surfaces and the atmospheric boundary layer (ABL), and how potential evaporation reflects this coupling on multiple time and length scales. In spite of advances in our ability to measure and model these processes, the heuristic CR hypothesis remains an unsolved, first-order problem. The leading theoretical models of the coupled land surface-atmosphere mechanisms responsible for the CR (those of Morton, Granger, and Szilagyi) focus primarily on vertical humidity (vapor pressure) profiles while assuming that vegetative and/or atmospheric diffusivities play an insignificant role in regulating the CR. Further, whereas Granger and Szilagyi assume almost opposite vertical temperature profile boundary conditions, both derivations appear to validate the CR. Contrary to these multiple working hypotheses' assumptions, our recent CR evaluation of 147 days (1987-1989) at the FIFE temperate grassland found that canopy conductance was an essential forcing variable in complementarity, and that including it in the definition of potential evaporation thus improved the CR in application. To isolate the exact forcing mechanisms of canopy and ABL conductances on complementarity, we evaluated the CR in a mixed-deciduous forest at Harvard Forest (summers 2005-2006) by comparing daily averaged water-stressed (non-irrigated, regionally stressed soil moisture) and water-unstressed (irrigated, 'potential') transpiration. Root-zone soil moisture of a red maple (Acer rubrum L.) sample set was elevated using a pulse-irrigation system. Whole-tree transpiration of the 'potential' (water-unstressed) and a reference (water-stressed) set of maples was monitored at high frequency using heat-dissipation Granier-type sap flux sensors. To isolate physiological and/or atmospheric forcing of the CR, we estimated isothermal Penman-Monteith transpiration models of both irrigated and non-irrigated time series using a Jarvis-type multiplicative stress model of scaled canopy conductance to water vapor transport. Poorly constrained model parameters (e.g., environmental stress boundary conditions) were estimated using a grid search routine; further, parameter confidence limits were inferred using bootstrap replacement sampling. Preliminary results suggest the following: (1) the absence of an unstressed canopy conductance in the Penman equation results in violation of fundamental CR assumptions (similar to FIFE); and (2) unlimited root-zone water availability does not reduce leaf-level stomatal resistance enough to yield complementarity, i.e., the typical CR potential signal is also a function of other environmental stresses, e.g., vapor pressure deficit. In summary, our results yield valuable insight into the role of vertical atmospheric and vegetative conductances in the CR.

  13. A computational algorithm for spacecraft control and momentum management

    NASA Technical Reports Server (NTRS)

    Dzielski, John; Bergmann, Edward; Paradiso, Joseph

    1990-01-01

    Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.

  14. Solving the Problem of Linear Viscoelasticity for Piecewise-Homogeneous Anisotropic Plates

    NASA Astrophysics Data System (ADS)

    Kaloerov, S. A.; Koshkin, A. A.

    2017-11-01

    An approximate method for solving the problem of linear viscoelasticity for thin anisotropic plates subject to transverse bending is proposed. The small-parameter method is used to reduce the problem to a sequence of boundary-value problems of the applied theory of plate bending, which are solved using complex potentials. The general form of the complex potentials in each approximation and the boundary conditions for determining them are obtained. Problems for a plate with elliptic elastic inclusions are solved as an example. Numerical results for a plate with one or two elliptical (circular) inclusions and with linear inclusions are analyzed.

  15. A duality approach for solving bounded linear programming problems with fuzzy variables based on ranking functions and its application in bounded transportation problems

    NASA Astrophysics Data System (ADS)

    Ebrahimnejad, Ali

    2015-08-01

    There are several methods in the literature for solving fuzzy variable linear programming problems (fuzzy linear programming in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out, and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of such fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, an application of this algorithm to solving bounded transportation problems with fuzzy supplies and demands is dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.

  16. Working Group Report: Dark Matter Complementarity (Dark Matter in the Coming Decade: Complementary Paths to Discovery and Beyond)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arrenberg, Sebastian; et al.

    2013-10-31

    In this Report we discuss the four complementary searches for the identity of dark matter: direct detection experiments that look for dark matter interacting in the lab, indirect detection experiments that connect lab signals to dark matter in our own and other galaxies, collider experiments that elucidate the particle properties of dark matter, and astrophysical probes sensitive to non-gravitational interactions of dark matter. The complementarity among the different dark matter searches is discussed qualitatively and illustrated quantitatively in several theoretical scenarios. Our primary conclusion is that the diversity of possible dark matter candidates requires a balanced program based on all four of those approaches.

  17. Insights to primitive replication derived from structures of small oligonucleotides

    NASA Technical Reports Server (NTRS)

    Smith, G. K.; Fox, G. E.

    1995-01-01

    Available information on the structure of small oligonucleotides is surveyed. It is observed that even small oligomers typically exhibit defined structures over a wide range of pH and temperature. These structures rely on a plethora of non-standard base-base interactions in addition to the traditional Watson-Crick pairings. Stable duplexes, though typically antiparallel, can be parallel or staggered and perfect complementarity is not essential. These results imply that primitive template directed reactions do not require high fidelity. Hence, the extensive use of Watson-Crick complementarity in genes rather than being a direct consequence of the primitive condensation process, may instead reflect subsequent selection based on the advantage of accuracy in maintaining the primitive genetic machinery once it arose.

  18. Linear Programming and Its Application to Pattern Recognition Problems

    NASA Technical Reports Server (NTRS)

    Omalley, M. J.

    1973-01-01

    Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.

  19. THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)

    EPA Science Inventory

    This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...

  20. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.

  1. A feasible DY conjugate gradient method for linear equality constraints

    NASA Astrophysics Data System (ADS)

    LI, Can

    2017-09-01

    In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method extends the Dai-Yuan conjugate gradient method to the linearly constrained setting. Owing to its low storage requirements, it can be applied to large-scale linear equality constrained problems. An attractive property of the method is that every generated direction is both feasible and a descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments showing the efficiency of the method are also given.
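    The idea of feasible descent directions can be sketched on a toy problem. The code below is a much-simplified illustration (a projected gradient method on an equality-constrained quadratic, not the paper's DY conjugate gradient method, and the objective and constraint are assumed examples): every iterate stays feasible because the search direction is the gradient projected onto the null space of the constraint.

```python
# Minimal sketch (not the paper's method): feasible descent for
#   minimize  f(x) = 0.5 * ||x||^2   subject to  a . x = b
# starting from a feasible point; the direction is the negative
# gradient projected onto the null space of the constraint.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project_to_null(g, a):
    """Remove from g its component along the constraint normal a."""
    s = dot(a, g) / dot(a, a)
    return [gi - s * ai for gi, ai in zip(g, a)]

def solve(a, x, iters=50):
    for _ in range(iters):
        g = x[:]                                # gradient of 0.5*||x||^2 is x
        d = [-gi for gi in project_to_null(g, a)]
        if dot(d, d) < 1e-24:                   # projected gradient vanished
            break
        alpha = -dot(g, d) / dot(d, d)          # exact line search (identity Hessian)
        x = [xi + alpha * di for xi, di in zip(x, d)]
    return x

a = [1.0, 1.0, 1.0]                             # constraint: x1 + x2 + x3 = 3
x0 = [3.0, 0.0, 0.0]                            # feasible start
x = solve(a, x0)
print(x)  # -> [1.0, 1.0, 1.0], the minimum-norm feasible point
```

Every update moves within the constraint set, so feasibility never has to be restored, which is the property the abstract highlights.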

  2. Numerical approximation for the infinite-dimensional discrete-time optimal linear-quadratic regulator problem

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
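    For a finite dimensional (here scalar) system, the state feedback gain of the discrete-time LQR problem can be computed by iterating the Riccati recursion backward until it converges. This is a hedged sketch of the standard finite dimensional construction, not of the paper's infinite dimensional framework, and the system values A, B, Q, R are assumed for illustration.

```python
# Scalar discrete-time LQR: x_{k+1} = A x_k + B u_k,
# cost = sum(Q x_k^2 + R u_k^2). Iterate the Riccati recursion
#   P <- Q + A P A - (A P B)^2 / (R + B P B)
# to its fixed point, then read off the optimal gain K (u = -K x).

def lqr_gain(A, B, Q, R, tol=1e-12, max_iter=10_000):
    P = Q
    for _ in range(max_iter):
        P_new = Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)
        if abs(P_new - P) < tol:
            P = P_new
            break
        P = P_new
    K = (A * P * B) / (R + B * P * B)   # optimal state-feedback gain
    return K, P

A, B, Q, R = 1.2, 1.0, 1.0, 1.0        # open loop unstable (|A| > 1)
K, P = lqr_gain(A, B, Q, R)
print(abs(A - B * K) < 1.0)            # closed loop is stable -> True
```

The same backward recursion, truncated after N steps, gives the time-varying gains of the finite horizon problem.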

  3. Linear systems on balancing chemical reaction problem

    NASA Astrophysics Data System (ADS)

    Kafi, R. A.; Abdillah, B.

    2018-01-01

    The concept of linear systems appears in a wide variety of applications. This paper presents a small sample of the real-world problems addressed by our study of linear systems. We show that the problem of balancing a chemical reaction can be described by a homogeneous linear system, whose solution is obtained by performing elementary row operations; the solution gives the coefficients of the chemical reaction. In addition, we present a computational calculation showing that mathematical software such as Matlab can be used to solve the system, instead of applying row operations manually.
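    As a minimal sketch of this approach (using Python rather than Matlab, and the reaction CH4 + O2 -> CO2 + H2O as an assumed example), the stoichiometric coefficients are a null-space vector of the element-by-species matrix, found here by row operations over exact rationals:

```python
from fractions import Fraction
from math import gcd

def balance(matrix):
    """Smallest positive integer vector in the null space of A (assumed
    one-dimensional), via Gauss-Jordan elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][c] for v in m[r]]          # scale pivot row
        for i in range(rows):
            if i != r and m[i][c] != 0:             # eliminate column c
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots][0]
    x = [Fraction(0)] * cols
    x[free] = Fraction(1)                           # set the free variable
    for row, c in zip(m, pivots):
        x[c] = -row[free]                           # back out pivot variables
    scale = 1                                       # clear denominators
    for v in x:
        scale = scale * v.denominator // gcd(scale, v.denominator)
    return [int(v * scale) for v in x]

# CH4 + O2 -> CO2 + H2O: rows are elements (C, H, O), columns are
# species, with reactants positive and products negative.
A = [[1, 0, -1,  0],   # carbon
     [4, 0,  0, -2],   # hydrogen
     [0, 2, -2, -1]]   # oxygen
print(balance(A))  # -> [1, 2, 1, 2], i.e. CH4 + 2 O2 -> CO2 + 2 H2O
```

Exact rational arithmetic avoids the rounding issues a floating-point null-space computation would introduce before scaling to integers.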

  4. Some New Results in Astrophysical Problems of Nonlinear Theory of Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Pikichyan, H. V.

    2017-07-01

    In the interpretation of observed astrophysical spectra, nonlinear problems of radiative transfer play a decisive role, because processes of multiple interaction between the matter of a cosmic medium and intense exciting radiation occur ubiquitously in astrophysical objects and their vicinities. The exciting radiation changes the physical properties of the original medium and is itself simultaneously modified, in a self-consistent manner, under the medium's influence. In the present report, we show that the consistent application of the principle of invariance to the nonlinear problem of bilateral external illumination of a scattering/absorbing one-dimensional anisotropic medium of finite geometrical thickness allows simplifications that were previously considered the prerogative of linear problems alone. The nonlinear problem is analyzed through three forms of the principle of invariance: (i) the adding of layers, (ii) its limiting form, described by the differential equations of invariant imbedding, and (iii) a transition to the so-called functional equations of "Ambartsumyan's complete invariance". Thereby, as an alternative to the Boltzmann equation, a new type of equation, the so-called "kinetic equations of equivalence", is obtained. By introducing new functions, the so-called "linear images" of the solution of the nonlinear radiative transfer problem, the linear structure of the solution of the nonlinear problem under study is further revealed. Linear images make it possible to carry over the statistical characteristics of the random walk of a "single quantum", or of a "beam of unit intensity", as well as the widely known "probabilistic interpretation of transfer phenomena", to the field of nonlinear problems. The structure of the equations obtained for determining the linear images is typical of linear problems.

  5. Substitution and Complementarity of Alcohol and Cannabis: A Review of the Literature.

    PubMed

    Subbaraman, Meenakshi Sabina

    2016-09-18

    Whether alcohol and cannabis are used as substitutes or complements remains debated, and findings across various disciplines have not been synthesized to date. This article is a first step towards organizing the interdisciplinary literature on alcohol and cannabis substitution and complementarity. Electronic searches were performed using PubMed and ISI Web of Knowledge. Behavioral studies of humans with "alcohol" (or "ethanol") and "cannabis" (or "marijuana") and "complement(*)" (or "substitut(*)") in the title or as a keyword were considered. Studies were organized according to sample characteristics (youth, general population, clinical and community-based). These groups were not set a priori, but were informed by the literature review process. Of the 39 studies reviewed, 16 support substitution, ten support complementarity, 12 support neither and one supports both. Results from studies of youth suggest that youth may reduce alcohol in more liberal cannabis environments (substitute), but reduce cannabis in more stringent alcohol environments (complement). Results from the general population suggest that substitution of cannabis for alcohol may occur under more lenient cannabis policies, though cannabis-related laws may affect alcohol use differently across genders and racial groups. Alcohol and cannabis act as both substitutes and complements. Policies aimed at one substance may inadvertently affect consumption of other substances. Future studies should collect fine-grained longitudinal, prospective data from the general population and subgroups of interest, especially in locations likely to legalize cannabis.

  6. Linear and Quadratic Change: A Problem from Japan

    ERIC Educational Resources Information Center

    Peterson, Blake E.

    2006-01-01

    In the fall of 2003, the author conducted research on the student teaching process in Japan. The basis for most of the lessons observed was rich mathematics problems. Upon returning to the US, the author used one such problem while teaching an algebra 2 class. This article introduces that problem, which gives rise to both linear and quadratic…

  7. On some problems of inorganic supramolecular chemistry.

    PubMed

    Pervov, Vladislav S; Zotova, Anna E

    2013-12-02

    In this study, some features that distinguish inorganic supramolecular host-guest objects from traditional architectures are considered. Crystalline inorganic supramolecular structures are the basis for the development of new functional materials. Here, the possible changes in the mechanism of crystalline inorganic supramolecular structure self-organization at high interaction potentials are discussed. The cases of changes in the host structures and corresponding changes in the charge states under guest intercalation, as well as their impact on phase stability and stoichiometry are considered. It was demonstrated that the deviation from the geometrical and topological complementarity conditions may be due to the additional energy gain from forming inorganic supramolecular structures. It has been assumed that molecular recognition principles can be employed for the development of physicochemical analysis and interpretation of metastable states in inorganic crystalline alloys. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Similarity network fusion for aggregating data types on a genomic scale.

    PubMed

    Wang, Bo; Mezlini, Aziz M; Demir, Feyyaz; Fiume, Marc; Tu, Zhuowen; Brudno, Michael; Haibe-Kains, Benjamin; Goldenberg, Anna

    2014-03-01

    Recent technologies have made it cost-effective to collect diverse types of genome-wide data. Computational methods are needed to combine these data to create a comprehensive view of a given disease or a biological process. Similarity network fusion (SNF) solves this problem by constructing networks of samples (e.g., patients) for each available data type and then efficiently fusing these into one network that represents the full spectrum of underlying data. For example, to create a comprehensive view of a disease given a cohort of patients, SNF computes and fuses patient similarity networks obtained from each of their data types separately, taking advantage of the complementarity in the data. We used SNF to combine mRNA expression, DNA methylation and microRNA (miRNA) expression data for five cancer data sets. SNF substantially outperforms single data type analysis and established integrative approaches when identifying cancer subtypes and is effective for predicting survival.

  9. Test of mutually unbiased bases for six-dimensional photonic quantum systems

    PubMed Central

    D'Ambrosio, Vincenzo; Cardano, Filippo; Karimi, Ebrahim; Nagali, Eleonora; Santamato, Enrico; Marrucci, Lorenzo; Sciarrino, Fabio

    2013-01-01

    In quantum information, complementarity of quantum mechanical observables plays a key role. The eigenstates of two complementary observables form a pair of mutually unbiased bases (MUBs). More generally, a set of MUBs consists of bases that are all pairwise unbiased. Except for specific dimensions of the Hilbert space, the maximal sets of MUBs are unknown in general. Even for a dimension as low as six, the identification of a maximal set of MUBs remains an open problem, although there is strong numerical evidence that no more than three simultaneous MUBs do exist. Here, by exploiting a newly developed holographic technique, we implement and test different sets of three MUBs for a single photon six-dimensional quantum state (a “qusix”), encoded exploiting polarization and orbital angular momentum of photons. A close agreement is observed between theory and experiments. Our results can find applications in state tomography, quantitative wave-particle duality, quantum key distribution. PMID:24067548

  10. Test of mutually unbiased bases for six-dimensional photonic quantum systems.

    PubMed

    D'Ambrosio, Vincenzo; Cardano, Filippo; Karimi, Ebrahim; Nagali, Eleonora; Santamato, Enrico; Marrucci, Lorenzo; Sciarrino, Fabio

    2013-09-25

    In quantum information, complementarity of quantum mechanical observables plays a key role. The eigenstates of two complementary observables form a pair of mutually unbiased bases (MUBs). More generally, a set of MUBs consists of bases that are all pairwise unbiased. Except for specific dimensions of the Hilbert space, the maximal sets of MUBs are unknown in general. Even for a dimension as low as six, the identification of a maximal set of MUBs remains an open problem, although there is strong numerical evidence that no more than three simultaneous MUBs do exist. Here, by exploiting a newly developed holographic technique, we implement and test different sets of three MUBs for a single photon six-dimensional quantum state (a "qusix"), encoded exploiting polarization and orbital angular momentum of photons. A close agreement is observed between theory and experiments. Our results can find applications in state tomography, quantitative wave-particle duality, quantum key distribution.
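    The mutual unbiasedness condition the two records above rely on can be checked numerically. The sketch below is a hedged illustration in dimension 3 rather than the paper's dimension 6: the computational basis and the discrete-Fourier basis form a pair of MUBs, i.e. every overlap satisfies |<e_j|f_k>|^2 = 1/d.

```python
import cmath

# Two orthonormal bases {e_j}, {f_k} of a d-dimensional Hilbert space
# are mutually unbiased when |<e_j|f_k>|^2 = 1/d for all j, k.
# Illustration for d = 3 (assumed example, not the paper's d = 6).

d = 3
omega = cmath.exp(2j * cmath.pi / d)   # primitive d-th root of unity

computational = [[1.0 if i == j else 0.0 for i in range(d)] for j in range(d)]
fourier = [[omega ** (j * k) / d ** 0.5 for j in range(d)] for k in range(d)]

def overlap_sq(u, v):
    """Squared magnitude of the inner product <u|v>."""
    inner = sum(a.conjugate() * b for a, b in zip(u, v))
    return abs(inner) ** 2

unbiased = all(
    abs(overlap_sq(e, f) - 1.0 / d) < 1e-12
    for e in computational for f in fourier
)
print(unbiased)  # -> True
```

In prime dimensions this Fourier construction extends to a full set of d + 1 MUBs; the difficulty the paper addresses is that no analogous complete set is known for d = 6.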

  11. Astrometric Search Method for Individually Resolvable Gravitational Wave Sources with Gaia

    NASA Astrophysics Data System (ADS)

    Moore, Christopher J.; Mihaylov, Deyan P.; Lasenby, Anthony; Gilmore, Gerard

    2017-12-01

    Gravitational waves (GWs) cause the apparent position of distant stars to oscillate with a characteristic pattern on the sky. Astrometric measurements (e.g., those made by Gaia) provide a new way to search for GWs. The main difficulty facing such a search is the large size of the data set; Gaia observes more than one billion stars. In this Letter the problem of searching for GWs from individually resolvable supermassive black hole binaries using astrometry is addressed for the first time; it is demonstrated how the data set can be compressed by a factor of more than 10^6, with a loss of sensitivity of less than 1%. This technique was successfully used to recover artificially injected GW signals from mock Gaia data and to assess the GW sensitivity of Gaia. Throughout the Letter the complementarity of Gaia and pulsar timing searches for GWs is highlighted.

  12. Ensembles and Experiments in Classical and Quantum Physics

    NASA Astrophysics Data System (ADS)

    Neumaier, Arnold

    A philosophically consistent axiomatic approach to classical and quantum mechanics is given. The approach realizes a strong formal implementation of Bohr's correspondence principle. In all instances, classical and quantum concepts are fully parallel: the same general theory has a classical realization and a quantum realization. Extending the "probability via expectation" approach of Whittle to noncommuting quantities, this paper defines quantities, ensembles, and experiments as mathematical concepts and shows how to model complementarity, uncertainty, probability, nonlocality and dynamics in these terms. The approach carries no connotation of unlimited repeatability; hence it can be applied to unique systems such as the universe. Consistent experiments provide an elegant solution to the reality problem, confirming the insistence of the orthodox Copenhagen interpretation that there is nothing but ensembles, while avoiding its elusive reality picture. The weak law of large numbers explains the emergence of classical properties for macroscopic systems.

  13. Perfect mixing of immiscible macromolecules at fluid interfaces

    NASA Astrophysics Data System (ADS)

    Sheiko, Sergei S.; Zhou, Jing; Arnold, Jamie; Neugebauer, Dorota; Matyjaszewski, Krzysztof; Tsitsilianis, Constantinos; Tsukruk, Vladimir V.; Carrillo, Jan-Michael Y.; Dobrynin, Andrey V.; Rubinstein, Michael

    2013-08-01

    The difficulty of mixing chemically incompatible substances—in particular macromolecules and colloidal particles—is a canonical problem limiting advances in fields ranging from health care to materials engineering. Although the self-assembly of chemically different moieties has been demonstrated in coordination complexes, supramolecular structures, and colloidal lattices among other systems, the mechanisms of mixing largely rely on specific interfacing of chemically, physically or geometrically complementary objects. Here, by taking advantage of the steric repulsion between brush-like polymers tethered to surface-active species, we obtained long-range arrays of perfectly mixed macromolecules with a variety of polymer architectures and a wide range of chemistries without the need of encoding specific complementarity. The net repulsion arises from the significant increase in the conformational entropy of the brush-like polymers with increasing distance between adjacent macromolecules at fluid interfaces. This entropic-templating assembly strategy enables long-range patterning of thin films on sub-100 nm length scales.

  14. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
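    The Gauss-Seidel sweep underlying the block method can be illustrated on a scalar tridiagonal system. This is a hedged toy analogue (a diagonally dominant 3x3 system, not the DTOC optimality system itself): each unknown is updated in place using the most recent values of its neighbours, which is exactly the forward-sweep structure the abstract describes at the block level.

```python
# Gauss-Seidel iteration for A x = b: sweep through the unknowns,
# replacing x[i] with the value that makes equation i exact given
# the current (partly updated) x. Converges here because A is
# strictly diagonally dominant.

def gauss_seidel(A, b, x, sweeps=100, tol=1e-12):
    n = len(b)
    for _ in range(sweeps):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new                     # use updated value immediately
        if delta < tol:
            break
    return x

A = [[ 4.0, -1.0,  0.0],                   # tridiagonal, diagonally dominant
     [-1.0,  4.0, -1.0],
     [ 0.0, -1.0,  4.0]]
b = [3.0, 2.0, 3.0]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0])
print([round(v, 6) for v in x])  # -> [1.0, 1.0, 1.0]
```

When the iteration matrix has spectral radius above one, as in the paper's applications, the same sweep is used not as a solver but as a preconditioner inside a Krylov method.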

  15. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  16. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  17. Multigrid approaches to non-linear diffusion problems on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The efficiency of three multigrid methods for solving highly non-linear diffusion problems on two-dimensional unstructured meshes is examined. The three multigrid methods differ mainly in the manner in which the nonlinearities of the governing equations are handled. These comprise a non-linear full approximation storage (FAS) multigrid method which is used to solve the non-linear equations directly, a linear multigrid method which is used to solve the linear system arising from a Newton linearization of the non-linear system, and a hybrid scheme which is based on a non-linear FAS multigrid scheme, but employs a linear solver on each level as a smoother. Results indicate that all methods are equally effective at converging the non-linear residual in a given number of grid sweeps, but that the linear solver is more efficient in CPU time due to the lower cost of linear versus non-linear grid sweeps.

  18. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf’s) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10⁴ to 10⁵ individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.

  19. A Semi-linear Backward Parabolic Cauchy Problem with Unbounded Coefficients of Hamilton–Jacobi–Bellman Type and Applications to Optimal Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addona, Davide, E-mail: d.addona@campus.unimib.it

    2015-08-15

    We obtain weighted uniform estimates for the gradient of the solutions to a class of linear parabolic Cauchy problems with unbounded coefficients. Such estimates are then used to prove existence and uniqueness of the mild solution to a semi-linear backward parabolic Cauchy problem, where the differential equation is the Hamilton–Jacobi–Bellman equation of a suitable optimal control problem. Via backward stochastic differential equations, we show that the mild solution is indeed the value function of the controlled equation and that the feedback law is verified.

  20. Transductive multi-view zero-shot learning.

    PubMed

    Fu, Yanwei; Hospedales, Timothy M; Xiang, Tao; Gong, Shaogang

    2015-11-01

    Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset/domain are biased when applied directly to the target dataset/domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding, to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.

  1. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints has been introduced. The former model is transformed into a deterministic multiple-objective nonlinear programming model by means of the introduction of random variables' expectation. The reference direction approach is used to deal with linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using the weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference direction and weighted sums. Varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
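    The reduction of a quadratic program to a linear complementarity problem (LCP) mentioned above can be made concrete on a toy instance. The solver below is a hypothetical teaching sketch that enumerates complementary bases of the LCP w = Mz + q, w, z ≥ 0, wᵀz = 0; it is exponential in the problem size and is not the parametric method of the paper.

```python
import itertools
import numpy as np

def solve_lcp(M, q):
    """Enumerate complementary bases of  w = M z + q,  w, z >= 0,  w^T z = 0.
    Exponential in n; for illustration on tiny problems only."""
    n = len(q)
    for basis in itertools.product([0, 1], repeat=n):   # 1 -> z_i basic, 0 -> w_i basic
        z = np.zeros(n)
        idx = [i for i in range(n) if basis[i]]
        if idx:
            try:
                z_idx = np.linalg.solve(M[np.ix_(idx, idx)], -q[idx])
            except np.linalg.LinAlgError:
                continue
            if np.any(z_idx < -1e-12):
                continue
            z[idx] = z_idx
        w = M @ z + q
        if np.all(w >= -1e-9):
            return z, w
    return None

# KKT conditions of the tiny QP  min 1/2 z^T M z + q^T z,  z >= 0  form this LCP:
M = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-2.0, 1.0])
z, w = solve_lcp(M, q)
assert np.allclose(z, [1.0, 0.0]) and np.allclose(w, [0.0, 1.0])
```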

  2. Complementarity in Spontaneous Emission: Quantum Jumps, Staggers and Slides

    NASA Astrophysics Data System (ADS)

    Wiseman, H.

    Dan Walls is rightly famous for his part in many of the outstanding developments in quantum optics in the last 30 years. Two of these are most relevant to this paper. The first is the prediction of nonclassical properties of the fluorescence of a two-level atom, such as antibunching [1] and squeezing [2]. Both of these predictions have now been verified experimentally [3,4]. The second is the investigation of fundamental issues such as complementarity and the uncertainty principle [5,6]. This latter area is one which has generated a lively theoretical discussion [7], and, more importantly, suggested new experiments [8]. It was also an area in which I had the honour of working with Dan [9], and of gaining the benefit of his instinct for picking a fruitful line of investigation.

  3. Black hole complementarity in gravity's rainbow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gim, Yongwan; Kim, Wontae, E-mail: yongwan89@sogang.ac.kr, E-mail: wtkim@sogang.ac.kr

    2015-05-01

    To see how the gravity's rainbow works for black hole complementarity, we evaluate the required energy for duplication of information in the context of black hole complementarity by calculating the critical value of the rainbow parameter in a certain class of rainbow Schwarzschild black holes. The resultant energy can be written as a well-defined limit for the vanishing rainbow parameter, which characterizes the deformation of the relativistic dispersion relation in the freely falling frame. It shows that the duplication of information in quantum mechanics could not be allowed below a certain critical value of the rainbow parameter; however, it might be possible above the critical value of the rainbow parameter, so that the consistent formulation in our model requires additional constraints or any other resolutions for the latter case.

  4. Linear decomposition approach for a class of nonconvex programming problems.

    PubMed

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. Based on solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, offering an interesting approach to solving the problem with a reduced running time.

  5. Mean-square state and parameter estimation for stochastic linear systems with Gaussian and Poisson noises

    NASA Astrophysics Data System (ADS)

    Basin, M.; Maldonado, J. J.; Zendejo, O.

    2016-07-01

    This paper proposes new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows effectiveness of the proposed mean-square filter and parameter estimator.
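    The state-augmentation idea used above — treating unknown parameters as additional states — can be sketched with a standard extended Kalman filter on a hypothetical scalar AR(1) model. This is a generic EKF illustration under Gaussian noise only, not the authors' mean-square filter for combined Gaussian and Poisson noises; all model values are made up.

```python
import numpy as np

# Hypothetical system x_{k+1} = a*x_k + w_k, observed y_k = x_k + v_k,
# with the gain 'a' unknown. Augment the state to s = (x, a).
rng = np.random.default_rng(1)
a_true, n_steps = 0.8, 500
x, ys = 1.0, []
for _ in range(n_steps):
    x = a_true * x + rng.normal(scale=0.2)
    ys.append(x + rng.normal(scale=0.05))

s = np.array([0.0, 0.4])            # initial guesses for (x, a)
P = np.eye(2)
Q = np.diag([0.2**2, 1e-6])         # tiny process noise on 'a' (a constant parameter)
R = 0.05**2
H = np.array([[1.0, 0.0]])          # only x is observed
for y in ys:
    F = np.array([[s[1], s[0]], [0.0, 1.0]])   # Jacobian of f(x, a) = (a*x, a)
    s = np.array([s[1] * s[0], s[1]])          # predict
    P = F @ P @ F.T + Q
    K = P @ H.T / (H @ P @ H.T + R)            # update
    s = s + (K * (y - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
assert abs(s[1] - a_true) < 0.15               # filter identifies the parameter
```

The filter thus serves simultaneously as a state estimator and a parameter identifier, which is the structure the abstract describes.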

  6. Precision is essential for efficient catalysis in an evolved Kemp eliminase.

    PubMed

    Blomberg, Rebecca; Kries, Hajo; Pinkas, Daniel M; Mittl, Peer R E; Grütter, Markus G; Privett, Heidi K; Mayo, Stephen L; Hilvert, Donald

    2013-11-21

    Linus Pauling established the conceptual framework for understanding and mimicking enzymes more than six decades ago. The notion that enzymes selectively stabilize the rate-limiting transition state of the catalysed reaction relative to the bound ground state reduces the problem of design to one of molecular recognition. Nevertheless, past attempts to capitalize on this idea, for example by using transition state analogues to elicit antibodies with catalytic activities, have generally failed to deliver true enzymatic rates. The advent of computational design approaches, combined with directed evolution, has provided an opportunity to revisit this problem. Starting from a computationally designed catalyst for the Kemp elimination--a well-studied model system for proton transfer from carbon--we show that an artificial enzyme can be evolved that accelerates an elementary chemical reaction 6 × 10⁸-fold, approaching the exceptional efficiency of highly optimized natural enzymes such as triosephosphate isomerase. A 1.09 Å resolution crystal structure of the evolved enzyme indicates that familiar catalytic strategies such as shape complementarity and precisely placed catalytic groups can be successfully harnessed to afford such high rate accelerations, making us optimistic about the prospects of designing more sophisticated catalysts.

  7. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L₁ estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L₁ estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
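    The sign-dependent costs that semilinear programming handles directly are classically removed by the variable split x = x⁺ − x⁻ with x⁺, x⁻ ≥ 0, which is exactly the transformation into an equivalent standard LP that the thesis avoids. A small numerical check of that equivalence, with hypothetical cost coefficients:

```python
import numpy as np

# Semilinear cost: c_plus per unit when x > 0, c_minus per unit when x < 0.
def semilinear_cost(x, c_plus, c_minus):
    return c_plus * max(x, 0.0) + c_minus * max(-x, 0.0)

# Standard-LP equivalent after splitting x = xp - xm with xp, xm >= 0.
# At an LP optimum xp * xm = 0 whenever c_plus + c_minus > 0, so costs agree.
def split_cost(xp, xm, c_plus, c_minus):
    return c_plus * xp + c_minus * xm

for x in np.linspace(-3, 3, 13):
    xp, xm = max(x, 0.0), max(-x, 0.0)
    assert np.isclose(semilinear_cost(x, 2.0, 5.0), split_cost(xp, xm, 2.0, 5.0))
```

The split doubles the number of variables, which is the overhead the modified simplex method of the thesis is designed to avoid.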

  8. Computer Power. Part 2: Electrical Power Problems and Their Amelioration.

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1989-01-01

    Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…

  9. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  10. Reduced-Size Integer Linear Programming Models for String Selection Problems: Application to the Farthest String Problem.

    PubMed

    Zörnig, Peter

    2015-08-01

    We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
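    For very small instances the farthest string objective can be computed by brute force, which makes concrete what the ILP models above encode with binary choice variables per position. The snippet below is a hypothetical, exponential-time illustration, not the reduced-size ILP of the paper.

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def farthest_string(strings, alphabet):
    """Brute force over all |alphabet|^L candidate strings; illustration only.
    ILP models replace this enumeration with 0/1 variables per (position, symbol)."""
    L = len(strings[0])
    best, best_dist = None, -1
    for cand in product(alphabet, repeat=L):
        d = min(hamming(cand, s) for s in strings)   # distance to the closest input
        if d > best_dist:
            best, best_dist = "".join(cand), d
    return best, best_dist

t, d = farthest_string(["AAA", "AAC"], "AC")
assert d == 2   # no binary string can differ from both inputs in position 3
```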

  11. Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-12-31

    Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that people from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.

  12. Newton's method: A link between continuous and discrete solutions of nonlinear problems

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.

    1980-01-01

    Newton's method for nonlinear mechanics problems replaces the governing nonlinear equations by an iterative sequence of linear equations. When the linear equations are linear differential equations, the equations are usually solved by numerical methods. The iterative sequence in Newton's method can exhibit poor convergence properties when the nonlinear problem has multiple solutions for a fixed set of parameters, unless the iterative sequences are aimed at solving for each solution separately. The theory of the linear differential operators is often a better guide for solution strategies in applying Newton's method than the theory of linear algebra associated with the numerical analogs of the differential operators. In fact, the theory for the differential operators can suggest the choice of numerical linear operators. In this paper the method of variation of parameters from the theory of linear ordinary differential equations is examined in detail in the context of Newton's method to demonstrate how it might be used as a guide for numerical solutions.
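    The replacement of a nonlinear problem by a sequence of linear solves, and the dependence of the computed root on the starting guess when multiple solutions exist, can be seen in a minimal NumPy sketch (a generic Newton iteration on an algebraic system, standing in for the differential-operator setting of the paper):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method: replace F(x) = 0 by the linear update J(x_k) dx = -F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# 2x2 nonlinear system (circle intersected with a line) with two known roots;
# the starting guess selects the root, echoing the multiple-solutions issue above.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
r = newton(F, J, [1.0, 0.5])
assert np.allclose(r, [np.sqrt(0.5), np.sqrt(0.5)])
r2 = newton(F, J, [-1.0, -0.5])
assert np.allclose(r2, [-np.sqrt(0.5), -np.sqrt(0.5)])
```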

  13. Performance and limitations of p-version finite element method for problems containing singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, K.K.; Surana, K.S.

    1996-10-01

    In this paper, the authors investigate the performance of p-version Least Squares Finite Element Formulation (LSFEF) for a hyperbolic system of equations describing a one-dimensional radial flow of an upper-convected Maxwell fluid. This problem has an r⁻² singularity in stress and an r⁻¹ singularity in velocity at r = 0. By carefully controlling the inner radius r_j, Deborah number De and Reynolds number Re, this problem can be used to simulate the following four classes of problems: (a) smooth linear problems, (b) smooth non-linear problems, (c) singular linear problems and (d) singular non-linear problems. They demonstrate that in cases (a) and (b) the p-version method, in particular p-version LSFEF, is meritorious. However, for cases (c) and (d) p-version LSFEF, even with extreme mesh refinement and very high p-levels, either produces wrong solutions, or results in the failure of the iterative solution procedure. Even though in the numerical studies they have considered p-version LSFEF for the radial flow of the upper-convected Maxwell fluid, the findings and conclusions are equally valid for other smooth and singular problems as well, regardless of the formulation strategy chosen and element approximation functions employed.

  14. Chromosome structures: reduction of certain problems with unequal gene content and gene paralogs to integer linear programming.

    PubMed

    Lyubetsky, Vassily; Gershgorin, Roman; Gorbunov, Konstantin

    2017-12-06

    Chromosome structure is a very limited model of the genome including the information about its chromosomes such as their linear or circular organization, the order of genes on them, and the DNA strand encoding a gene. Gene lengths, nucleotide composition, and intergenic regions are ignored. Although highly incomplete, such structure can be used in many cases, e.g., to reconstruct phylogeny and evolutionary events, to identify gene synteny, regulatory elements and promoters (considering highly conserved elements), etc. Three problems are considered; all assume unequal gene content and the presence of gene paralogs. The distance problem is to determine the minimum number of operations required to transform one chromosome structure into another and the corresponding transformation itself including the identification of paralogs in two structures. We use the DCJ model which is one of the most studied combinatorial rearrangement models. Double-, sesqui-, and single-operations as well as deletion and insertion of a chromosome region are considered in the model; the single ones comprise cut and join. In the reconstruction problem, a phylogenetic tree with chromosome structures in the leaves is given. It is necessary to assign the structures to inner nodes of the tree to minimize the sum of distances between terminal structures of each edge and to identify the mutual paralogs in a fairly large set of structures. A linear algorithm is known for the distance problem without paralogs, while the presence of paralogs makes it NP-hard. If paralogs are allowed but the insertion and deletion operations are missing (and special constraints are imposed), the reduction of the distance problem to integer linear programming is known. Apparently, the reconstruction problem is NP-hard even in the absence of paralogs. The problem of contigs is to find the optimal arrangements for each given set of contigs, which also includes the mutual identification of paralogs. 
We proved that these problems can be reduced to integer linear programming formulations, which allows them to be solved as a very special case of the integer linear programming tool. The results were tested on synthetic and biological samples. Three well-known problems were thus reduced to a very special case of integer linear programming, which is a new method for their solution. Integer linear programming is clearly among the main computational methods and, as generally accepted, is fast on average; in particular, computation systems specifically targeted at it are available. The challenges are to reduce the size of the corresponding integer linear programming formulations and to incorporate a more detailed biological concept into our model of the reconstruction.

  15. Probing primordial features with future galaxy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballardini, M.; Fedeli, C.; Moscardini, L.

    2016-10-01

    We study the capability of future measurements of the galaxy clustering power spectrum to probe departures from a power-law spectrum for primordial fluctuations. On considering the information from the galaxy clustering power spectrum up to quasi-linear scales, i.e. k < 0.1 h Mpc⁻¹, we present forecasts for DESI, Euclid and SPHEREx in combination with CMB measurements. As examples of departures in the primordial power spectrum from a simple power-law, we consider four Planck 2015 best-fits motivated by inflationary models with different breaking of the slow-roll approximation. At present, these four representative models provide an improved fit to CMB temperature anisotropies, although not at a statistically significant level. As for other extensions in the matter content of the simplest ΛCDM model, the complementarity of the information in the resulting matter power spectrum expected from these galaxy surveys and in the primordial power spectrum from CMB anisotropies can be effective in constraining cosmological models. We find that the three galaxy surveys can add significant information to CMB to better constrain the extra parameters of the four models considered.

  16. PePSS - A portable sky scanner for measuring extremely low night-sky brightness

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Kómar, Ladislav; Kundracik, František

    2018-05-01

    A new portable sky scanner designed for low-light-level detection at night is developed and employed in night-sky brightness measurements in a rural region. The fast readout, adjustable sensitivity and linear response guaranteed over 5-6 orders of magnitude make the device well suited for narrow-band photometry in both dark areas and bright urban and suburban environments. Quasi-monochromatic night-sky brightness data are advantageous for the accurate characterization of the spectral power distribution of scattered and emitted light, and also allow for the retrieval of light output patterns from whole-city light sources. The sky scanner can operate in both night and day regimes, taking advantage of the complementarity of both radiance data types. Due to its inherent very high sensitivity, the photomultiplier tube can be used in night-sky radiometry, while the spectrometer-equipped system component, capable of detecting elevated intensities, is used in daylight monitoring. Daylight is a source of information on atmospheric optical properties that in turn are necessary in processing night-sky radiances. We believe that the sky scanner has the potential to revolutionize night-sky monitoring systems.

  17. Can niche plasticity promote biodiversity-productivity relationships through increased complementarity?

    PubMed

    Niklaus, Pascal A; Baruffol, Martin; He, Jin-Sheng; Ma, Keping; Schmid, Bernhard

    2017-04-01

    Most experimental biodiversity-ecosystem functioning research to date has addressed herbaceous plant communities. Comparably little is known about how forest communities will respond to species losses, despite their importance for global biogeochemical cycling. We studied tree species interactions in experimental subtropical tree communities with 33 distinct tree species mixtures and one, two, or four species. Plots were either exposed to natural light levels or shaded. Trees grew rapidly and were intensely competing above ground after 1.5 growing seasons when plots were thinned and the vertical distribution of leaves and wood determined by separating the biomass of harvested trees into 50 cm height increments. Our aim was to analyze effects of species richness in relation to the vertical allocation of leaf biomass and wood, with an emphasis on bipartite competitive interactions among species. Aboveground productivity increased with species richness. The community-level vertical leaf and wood distribution depended on the species composition of communities. Mean height and breadth of species-level vertical leaf and wood distributions did not change with species richness. However, the extra biomass produced by mixtures compared to monocultures of the component species increased when vertical leaf distributions of monocultures were more different. Decomposition of biodiversity effects with the additive partitioning scheme indicated positive complementarity effects that were higher in light than in shade. Selection effects did not deviate from zero, irrespective of light levels. Vertical leaf distributions shifted apart in mixed stands as a consequence of competition-driven phenotypic plasticity, promoting realized complementarity. Structural equation models showed that this effect was larger for species that differed more in growth strategies that were characterized by functional traits. 
In 13 of the 18 investigated two-species mixtures, both species benefitted relative to intraspecific competition in monoculture. In the remaining five pairwise mixtures, the relative yield gain of one species exceeded the relative yield loss of the other species, resulting in a relative yield total (RYT) exceeding 1. Overall, our analysis indicates that richness-productivity relationships are promoted by interspecific niche complementarity at early stages of stand development, and that this effect is enhanced by architectural plasticity. © 2017 by the Ecological Society of America.
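    The relative yield total (RYT) used above is simple arithmetic: each species' mixture yield is divided by its monoculture yield, and the two ratios are summed. A small worked example with hypothetical yields (not data from this study):

```python
# Hypothetical yields for a two-species replacement-design experiment.
y1_mono, y2_mono = 100.0, 80.0   # monoculture yields of species 1 and 2
y1_mix, y2_mix = 70.0, 50.0      # per-species yields within the mixture

ry1 = y1_mix / y1_mono           # relative yield of species 1
ry2 = y2_mix / y2_mono           # relative yield of species 2
ryt = ry1 + ry2                  # relative yield total

assert ryt > 1.0                 # RYT > 1 signals overyielding / complementarity
```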

  18. Stability Analysis of Finite Difference Schemes for Hyperbolic Systems, and Problems in Applied and Computational Linear Algebra.

    DTIC Science & Technology

    FINITE DIFFERENCE THEORY, *LINEAR ALGEBRA, APPLIED MATHEMATICS, APPROXIMATION(MATHEMATICS), BOUNDARY VALUE PROBLEMS, COMPUTATIONS, HYPERBOLAS, MATHEMATICAL MODELS, NUMERICAL ANALYSIS, PARTIAL DIFFERENTIAL EQUATIONS, STABILITY.

  19. On some problems in a theory of thermally and mechanically interacting continuous media. Ph.D. Thesis; [linearized theory of interacting mixture of elastic solid and viscous fluid

    NASA Technical Reports Server (NTRS)

    Lee, Y. M.

    1971-01-01

    Using a linearized theory of a thermally and mechanically interacting mixture of a linear elastic solid and a viscous fluid, we derive a fundamental relation in integral form called a reciprocity relation. This reciprocity relation relates the solution of one initial-boundary value problem with a given set of initial and boundary data to the solution of a second initial-boundary value problem corresponding to different initial and boundary data for a given interacting mixture. From this general integral relation, reciprocity relations are derived for a heat-conducting linear elastic solid, and for a heat-conducting viscous fluid. An initial-boundary value problem is posed and solved for the mixture of linear elastic solid and viscous fluid. With the aid of the Laplace transform and contour integration, a real integral representation for the displacement of the solid constituent is obtained as one of the principal results of the analysis.

  20. Control problem for a system of linear loaded differential equations

    NASA Astrophysics Data System (ADS)

    Barseghyan, V. R.; Barseghyan, T. V.

    2018-04-01

    The problem of control and optimal control for a system of linear loaded differential equations is considered. Necessary and sufficient conditions for complete controllability and conditions for the existence of a program control and the corresponding motion are formulated. The explicit form of control action for the control problem is constructed and a method for solving the problem of optimal control is proposed.

  1. ORACLS: A system for linear-quadratic-Gaussian control law design

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1978-01-01

    A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
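The optimal linear regulator computation at the heart of such packages can be sketched in its simplest form. The toy below (not from the ORACLS report; all parameters are invented) iterates the discrete-time Riccati recursion for a scalar system until the steady-state gain emerges:

```python
# Steady-state discrete LQR for a scalar system x[k+1] = a*x[k] + b*u[k]
# with cost sum(q*x^2 + r*u^2), via backward Riccati iteration -- a minimal
# sketch of the regulator computation LQG design packages automate.

def dlqr_scalar(a, b, q, r, iters=200):
    p = q  # terminal cost-to-go
    for _ in range(iters):
        k = a * b * p / (r + b * b * p)        # optimal feedback gain
        p = q + a * a * p - a * b * p * k      # Riccati update
    return k, p

k, p = dlqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
# for these parameters p converges to the golden ratio (1 + sqrt(5)) / 2
```

The control law is then u[k] = -k*x[k]; the same recursion, with matrices in place of scalars, underlies the sampled-data regulator subroutines described above.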

  2. EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.

    ERIC Educational Resources Information Center

    Jarvis, John J.; And Others

    Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…

  3. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    PubMed

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
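The factor-then-back-substitute pattern underlying QR-based least squares can be illustrated on a small dense problem. This sketch uses classical Gram-Schmidt and is not the authors' updating algorithm; the data are invented:

```python
# Ordinary least squares via QR factorization (classical Gram-Schmidt),
# followed by back-substitution on R x = Q^T b.

def qr(A):
    m, n = len(A), len(A[0])
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(m)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(m))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(m)]
        R[j][j] = sum(t * t for t in v) ** 0.5
        for i in range(m):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def lstsq(A, b):
    Q, R = qr(A)
    n = len(R)
    y = [sum(Q[i][j] * b[i] for i in range(len(b))) for j in range(n)]  # Q^T b
    x = [0.0] * n
    for j in range(n - 1, -1, -1):  # back-substitution
        x[j] = (y[j] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x

# fit y = c0 + c1*t through points lying exactly on y = 1 + 2t
A = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
b = [3.0, 5.0, 7.0]
x = lstsq(A, b)
```

The updating procedure in the article avoids refactoring A from scratch when constraint rows are appended; this sketch only shows the baseline solve being updated.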

  4. Single-machine common/slack due window assignment problems with linear decreasing processing times

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.

  5. Krylov subspace methods - Theory, algorithms, and applications

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1990-01-01

    Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the Krylov subspace span{b, Ab, ..., A^(m-1)b}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations is discussed.
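As a concrete instance of a Krylov subspace method, the conjugate gradient iteration draws its iterates from span{b, Ab, ...} and, in exact arithmetic, solves an n-by-n symmetric positive-definite system in at most n steps. A minimal pure-Python sketch on a toy matrix (not taken from the review):

```python
# Conjugate gradients: the k-th iterate minimizes the A-norm of the error
# over the Krylov subspace span{b, Ab, ..., A^(k-1) b}.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def cg(A, b, iters=None):
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - A x (x = 0 initially)
    p = r[:]
    rr = sum(v * v for v in r)
    for _ in range(iters or n):
        Ap = matvec(A, p)
        alpha = rr / sum(u * v for u, v in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rr_new = sum(v * v for v in r)
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]
x = cg(A, b)   # exact after n = 2 steps in exact arithmetic
```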

  6. Fuzzy bi-objective linear programming for portfolio selection problem with magnitude ranking function

    NASA Astrophysics Data System (ADS)

    Kusumawati, Rosita; Subekti, Retno

    2017-04-01

    Fuzzy bi-objective linear programming (FBOLP) is a bi-objective linear programming model over fuzzy numbers, in which the coefficients of the equations are fuzzy numbers. This model is proposed to solve the portfolio selection problem, which seeks an asset portfolio with the lowest risk and the highest expected return. The FBOLP model with normal fuzzy numbers for the risk and expected return of stocks is transformed into a linear programming (LP) model using a magnitude ranking function.

  7. An Introduction to Multilinear Formula Score Theory. Measurement Series 84-4.

    ERIC Educational Resources Information Center

    Levine, Michael V.

    Formula score theory (FST) associates each multiple choice test with a linear operator and expresses all of the real functions of item response theory as linear combinations of the operator's eigenfunctions. Hard measurement problems can then often be reformulated as easier, standard mathematical problems. For example, the problem of estimating…

  8. A systematic linear space approach to solving partially described inverse eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Hu, Sau-Lon James; Li, Haujun

    2008-06-01

    Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved under the situations that have either unique or infinitely many solutions.
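The conversion the paper describes, from a structured eigenpair constraint to simultaneous linear equations, can be shown on the smallest case. This sketch reconstructs a 2x2 symmetric Toeplitz matrix from two prescribed eigenpairs via ordinary normal equations (the paper uses the singular value decomposition; all numbers here are invented):

```python
# T = [[t0, t1], [t1, t0]] is linear in theta = (t0, t1), so each eigenpair
# equation T v = lambda v contributes linear rows; stack and solve.

def toeplitz_rows(v, lam):
    # (T v)_0 = t0*v0 + t1*v1,  (T v)_1 = t1*v0 + t0*v1
    return [([v[0], v[1]], lam * v[0]),
            ([v[1], v[0]], lam * v[1])]

def solve2(M, y):
    # normal equations (M^T M) x = M^T y for 2 unknowns
    a00 = sum(r[0] * r[0] for r in M); a01 = sum(r[0] * r[1] for r in M)
    a11 = sum(r[1] * r[1] for r in M)
    b0 = sum(r[0] * yi for r, yi in zip(M, y))
    b1 = sum(r[1] * yi for r, yi in zip(M, y))
    det = a00 * a11 - a01 * a01
    return [(a11 * b0 - a01 * b1) / det, (a00 * b1 - a01 * b0) / det]

# prescribe eigenpairs (3, [1, 1]) and (1, [1, -1])
rows = toeplitz_rows([1.0, 1.0], 3.0) + toeplitz_rows([1.0, -1.0], 1.0)
M = [r for r, _ in rows]
y = [t for _, t in rows]
t0, t1 = solve2(M, y)   # recovers T = [[2, 1], [1, 2]]
```

With only one prescribed eigenpair the stacked system is rank-deficient, which is the "infinitely many solutions" situation the abstract mentions.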

  9. A Linear Programming Approach to Routing Control in Networks of Constrained Nonlinear Positive Systems with Concave Flow Rates

    NASA Technical Reports Server (NTRS)

    Arneson, Heather M.; Dousse, Nicholas; Langbort, Cedric

    2014-01-01

    We consider control design for positive compartmental systems in which each compartment's outflow rate is described by a concave function of the amount of material in the compartment. We address the problem of determining the routing of material between compartments to satisfy time-varying state constraints while ensuring that material reaches its intended destination over a finite time horizon. We give sufficient conditions for the existence of a time-varying state-dependent routing strategy which ensures that the closed-loop system satisfies basic network properties of positivity, conservation and interconnection while ensuring that capacity constraints are satisfied, when possible, or adjusted if a solution cannot be found. These conditions are formulated as a linear programming problem. Instances of this linear programming problem can be solved iteratively to generate a solution to the finite horizon routing problem. Results are given for the application of this control design method to an example problem. Key words: linear programming; control of networks; positive systems; controller constraints and structure.

  10. Quantification of in vivo site-specific Asp isomerization and Asn deamidation of mAbs in animal serum using IP-LC-MS.

    PubMed

    Mehl, John T; Sleczka, Bogdan G; Ciccimaro, Eugene F; Kozhich, Alexander T; Gilbertson, Deb G; Vuppugalla, Ragini; Huang, Christine S; Stevens, Brenda; Mo, Jingjie; Deyanova, Ekaterina G; Wang, Yun; Huang, Richard Yc; Chen, Guodong; Olah, Timothy V

    2016-08-01

    Isomerization of aspartic acid and deamidation of asparagine are two common amino acid modifications that are of particular concern if located within the complementarity-determining region of therapeutic antibodies. Questions arise as to the extent of modification occurring in circulation due to potential exposure of the therapeutic antibody to different pH regimes. To enable evaluation of site-specific isomerization and deamidation of human mAbs in vivo, immunoprecipitation (IP) has been combined with LC-MS, providing selective enrichment, separation and detection of native and modified forms of tryptic peptides comprising complementarity-determining region sequences. IP-LC-MS can be applied to simultaneously quantify in vivo drug concentrations and measure the extent of isomerization or deamidation in PK studies conducted during the drug discovery stage.

  11. Household adoption of energy and water-efficient appliances: An analysis of attitudes, labelling and complementary green behaviours in selected OECD countries.

    PubMed

    Dieu-Hang, To; Grafton, R Quentin; Martínez-Espiñeira, Roberto; Garcia-Valiñas, Maria

    2017-07-15

    Using a household-based data set of more than 12,000 households from 11 OECD countries, we analyse the factors underlying the decision by households to adopt energy-efficient and water-efficient equipment. We evaluate the roles of both attitudes and labelling schemes on the adoption of energy and water-efficient equipment, and also the interaction and complementarity between energy and water conservation behaviours. Our findings show: one, 'green' social norms and favourable attitudes towards the environment are associated with an increased likelihood of households' adoption of energy and water-efficient appliances; two, households' purchase decisions are positively affected by their awareness, understanding, and trust of labelling schemes; and three, there is evidence of complementarity between energy conservation and water conservation behaviours. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A wave function for stock market returns

    NASA Astrophysics Data System (ADS)

    Ataullah, Ali; Davidson, Ian; Tippett, Mark

    2009-02-01

    The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well but where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and of how the wave function presented by it leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.

  13. "It's best not to think about it at all-like the new taxes": Reality, observer, and complementarity in Bohr and Pauli

    NASA Astrophysics Data System (ADS)

    Plotnitsky, Arkady

    2012-12-01

    This article considers the concepts of reality, observer, and complementarity in Pauli and Bohr, and the similarities and, especially, differences in their understanding of these concepts, differences defined most essentially by their respective views of the role of the human observer in quantum measurement. These differences are significant even in the case of their respective interpretations of quantum phenomena and quantum mechanics, where the influence of Bohr's ideas on Pauli's understanding of quantum physics is particularly strong. They become especially strong and even radical in the case of their overall philosophical visions, where the impact of Jungian psychology, coupled to that of the earlier archetypal thinking of such figures as Kepler and Fludd, drives Pauli's thinking ever further away from that of Bohr.

  14. Accelerator and reactor complementarity in coherent neutrino-nucleus scattering

    NASA Astrophysics Data System (ADS)

    Dent, James B.; Dutta, Bhaskar; Liao, Shu; Newstead, Jayden L.; Strigari, Louis E.; Walker, Joel W.

    2018-02-01

    We study the complementarity between accelerator and reactor coherent elastic neutrino-nucleus scattering (CEνNS) experiments for constraining new physics in the form of nonstandard neutrino interactions (NSI). First, considering just data from the recent observation by the COHERENT experiment, we explore interpretive degeneracies that emerge when activating either two or four unknown NSI parameters. Next, we demonstrate that simultaneous treatment of reactor and accelerator experiments, each employing at least two distinct target materials, can break a degeneracy between up and down flavor-diagonal NSI terms that survives analysis of neutrino oscillation experiments. Considering four flavor-diagonal (ee/μμ) up- and down-type NSI parameters, we find that all terms can be measured with high local precision (to a width as small as ~5% in Fermi units) by next-generation experiments, although discrete reflection ambiguities persist.

  15. An evolutionary conserved pattern of 18S rRNA sequence complementarity to mRNA 5′ UTRs and its implications for eukaryotic gene translation regulation

    PubMed Central

    Pánek, Josef; Kolář, Michal; Vohradský, Jiří; Shivaya Valášek, Leoš

    2013-01-01

    There are several key mechanisms regulating eukaryotic gene expression at the level of protein synthesis. Interestingly, the least explored mechanisms of translational control are those that involve the translating ribosome per se, mediated for example via predicted interactions between the ribosomal RNAs (rRNAs) and mRNAs. Here, we took advantage of robustly growing large-scale data sets of mRNA sequences for numerous organisms, solved ribosomal structures and computational power to computationally explore the mRNA–rRNA complementarity that is statistically significant across the species. Our predictions reveal highly specific sequence complementarity of 18S rRNA sequences with mRNA 5′ untranslated regions (UTRs) forming a well-defined 3D pattern on the rRNA sequence of the 40S subunit. Broader evolutionary conservation of this pattern may imply that 5′ UTRs of eukaryotic mRNAs, which have already emerged from the mRNA-binding channel, may contact several complementary spots on 18S rRNA situated near the exit of the mRNA binding channel and on the middle-to-lower body of the solvent-exposed 40S ribosome including its left foot. We discuss physiological significance of this structurally conserved pattern and, in the context of previously published experimental results, propose that it modulates scanning of the 40S subunit through 5′ UTRs of mRNAs. PMID:23804757
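The underlying computation, scanning a 5′ UTR for windows complementary to an rRNA segment, reduces to counting Watson-Crick pairings against each reversed (antiparallel) window. A toy sketch with invented sequences, not actual 18S rRNA or UTR data:

```python
# Find the UTR window with the most Watson-Crick pairings to an rRNA
# segment, comparing against each reversed window (antiparallel binding).

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def best_complementary_window(rrna, utr):
    k = len(rrna)
    best = (0, -1)  # (pairings, start position in UTR)
    for s in range(len(utr) - k + 1):
        window = utr[s:s + k][::-1]
        pairings = sum(PAIR[a] == b for a, b in zip(rrna, window))
        if pairings > best[0]:
            best = (pairings, s)
    return best

rrna = "GGAUC"            # hypothetical rRNA segment
utr = "CCCGAUCCAAA"       # hypothetical mRNA 5' UTR
matches, start = best_complementary_window(rrna, utr)
```

The study does this at scale and then asks which pairings are statistically significant across species; the statistics are not reproduced here.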

  16. Rescuing complementarity with little drama

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bouland, Adam; Chatwin-Davies, Aidan; Pollack, Jason; Yuen, Henry

    2016-12-01

    The AMPS paradox challenges black hole complementarity by apparently constructing a way for an observer to bring information from the outside of the black hole into its interior if there is no drama at its horizon, making manifest a violation of monogamy of entanglement. We propose a new resolution to the paradox: this violation cannot be explicitly checked by an infalling observer in the finite proper time they have to live after crossing the horizon. Our resolution depends on a weak relaxation of the no-drama condition (we call it "little-drama") which is the "complementarity dual" of scrambling of information on the stretched horizon. When translated to the description of the black hole interior, this implies that the fine-grained quantum information of infalling matter is rapidly diffused across the entire interior while classical observables and coarse-grained geometry remain unaffected. Under the assumption that information has diffused throughout the interior, we consider the difficulty of the information-theoretic task that an observer must perform after crossing the event horizon of a Schwarzschild black hole in order to verify a violation of monogamy of entanglement. We find that the time required to complete a necessary subroutine of this task, namely the decoding of Bell pairs from the interior and the late radiation, takes longer than the maximum amount of time that an observer can spend inside the black hole before hitting the singularity. Therefore, an infalling observer cannot observe monogamy violation before encountering the singularity.

  17. AVC: Selecting discriminative features on basis of AUC by maximizing variable complementarity.

    PubMed

    Sun, Lei; Wang, Jun; Wei, Jinmao

    2017-03-14

    The receiver operating characteristic (ROC) curve is well known for evaluating classification performance in the biomedical field. Owing to its superiority in dealing with imbalanced and cost-sensitive data, the ROC curve has been exploited as a popular metric to evaluate and identify disease-related genes (features). Existing ROC-based feature selection approaches are simple and effective in evaluating individual features. However, these approaches may fail to find the real target feature subset because they lack effective means to reduce redundancy between features, which is essential in machine learning. In this paper, we propose to assess feature complementarity by measuring the distances between misclassified instances and their nearest misses on the dimensions of pairwise features. If a misclassified instance and its nearest miss on one feature dimension are far apart on another feature dimension, the two features are regarded as complementary. Subsequently, we propose a novel filter feature selection approach based on ROC analysis. The new approach employs an efficient heuristic search strategy to select optimal features with the highest complementarities. Experimental results on a broad range of microarray data sets validate that classifiers built on the feature subset selected by our approach achieve the minimal balanced error rate with a small number of significant features. Compared with other ROC-based feature selection approaches, our approach selects fewer features and effectively improves classification performance.
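The per-feature metric such ROC-based selectors rank by can be computed directly from scores via the Mann-Whitney relation AUC = P(positive score > negative score), plus half the tie probability. A toy sketch with invented data (the paper's complementarity measure itself is not reproduced):

```python
# AUC of a single scoring feature via the Mann-Whitney statistic:
# the fraction of positive/negative pairs the feature ranks correctly.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
perfect = auc([0.9, 0.8, 0.7, 0.3, 0.2, 0.1], labels)   # ranks all pairs correctly
noisy = auc([0.9, 0.3, 0.7, 0.8, 0.2, 0.1], labels)     # a few inversions
```

A filter selector would compute this per feature and then trade individual AUC against pairwise complementarity when building the subset.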

  18. Rescuing complementarity with little drama

    DOE PAGES

    Bao, Ning; Bouland, Adam; Chatwin-Davies, Aidan; ...

    2016-12-07

    The AMPS paradox challenges black hole complementarity by apparently constructing a way for an observer to bring information from the outside of the black hole into its interior if there is no drama at its horizon, making manifest a violation of monogamy of entanglement. We propose a new resolution to the paradox: this violation cannot be explicitly checked by an infalling observer in the finite proper time they have to live after crossing the horizon. Our resolution depends on a weak relaxation of the no-drama condition (we call it “little-drama”) which is the “complementarity dual” of scrambling of information on the stretched horizon. When translated to the description of the black hole interior, this implies that the fine-grained quantum information of infalling matter is rapidly diffused across the entire interior while classical observables and coarse-grained geometry remain unaffected. Under the assumption that information has diffused throughout the interior, we consider the difficulty of the information-theoretic task that an observer must perform after crossing the event horizon of a Schwarzschild black hole in order to verify a violation of monogamy of entanglement. We find that the time required to complete a necessary subroutine of this task, namely the decoding of Bell pairs from the interior and the late radiation, takes longer than the maximum amount of time that an observer can spend inside the black hole before hitting the singularity. Furthermore, an infalling observer cannot observe monogamy violation before encountering the singularity.

  19. Categorizing the telehealth policy response of countries and their implications for complementarity of telehealth policy.

    PubMed

    Varghese, Sunil; Scott, Richard E

    2004-01-01

    Developing countries are exploring the role of telehealth to overcome the challenges of providing adequate health care services. However, this process faces disparities and a lack of complementarity in telehealth policy development. Telehealth has the potential to transcend geopolitical boundaries, yet telehealth policy developed in one jurisdiction may hamper applications in another. Understanding such policy complexities is essential for telehealth to realize its full global potential. This study investigated 12 East Asian countries that may represent a microcosm of the world, to determine if the telehealth policy response of countries could be categorized, and whether any implications could be identified for the development of complementary telehealth policy. The countries were Cambodia, China, Hong Kong, Indonesia, Japan, Malaysia, Myanmar, Singapore, South Korea, Taiwan, Thailand, and Vietnam. Three categories of country response were identified in regard to national policy support and development. The first category was "None" (Cambodia, Myanmar, and Vietnam), where international partners, driven by humanitarian concerns, lead telehealth activity. The second category was "Proactive" (China, Indonesia, Malaysia, Singapore, South Korea, Taiwan, and Thailand), where national policies were designed with the view that telehealth initiatives are a component of larger development objectives. The third was "Reactive" (Hong Kong and Japan), where policies were only proffered after telehealth activities were sustainable. It is concluded that although complementarity of telehealth policy development is not occurring, increased interjurisdictional telehealth activity, regional clusters, and concerted and coordinated effort amongst researchers, practitioners, and policy makers may alter this trend.

  20. Substitution and complementarity of alcohol and cannabis: A review of the literature

    PubMed Central

    2016-01-01

    Background: Whether alcohol and cannabis are used as substitutes or complements remains debated, and findings across various disciplines have not been synthesized to date. Objective: This paper is a first step towards organizing the interdisciplinary literature on alcohol and cannabis substitution and complementarity. Method: Electronic searches were performed using PubMed and ISI Web of Knowledge. Behavioral studies of humans with ‘alcohol’ (or ‘ethanol’) and ‘cannabis’ (or ‘marijuana’) and ‘complement*’ (or ‘substitut*’) in the title or as a keyword were considered. Studies were organized according to sample characteristics (youth, general population, clinical and community-based). These groups were not set a priori, but were informed by the literature review process. Results: Of the 39 studies reviewed, 16 support substitution, ten support complementarity, 12 support neither and one supports both. Results from studies of youth suggest that youth may reduce alcohol in more liberal cannabis environments (substitute), but reduce cannabis in more stringent alcohol environments (complement). Results from the general population suggest that substitution of cannabis for alcohol may occur under more lenient cannabis policies, though cannabis-related laws may affect alcohol use differently across genders and racial groups. Conclusions: Alcohol and cannabis act as both substitutes and complements. Policies aimed at one substance may inadvertently affect consumption of other substances. Future studies should collect fine-grained longitudinal, prospective data from the general population and subgroups of interest, especially in locations likely to legalize cannabis. PMID:27249324

  1. Investigating the linearity assumption between lumber grade mix and yield using design of experiments (DOE)

    Treesearch

    Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas

    2004-01-01

    Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...

  2. Deformed Palmprint Matching Based on Stable Regions.

    PubMed

    Wu, Xiangqian; Zhao, Qiushi

    2015-12-01

    Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformations of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problems, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linear deformed palmprint images with piecewise-linear deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then, the stable regions complying with the linear transformations are decided using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed models and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform the state-of-the-art methods.

  3. A Resume of Stochastic, Time-Varying, Linear System Theory with Application to Active-Sonar Signal-Processing Problems

    DTIC Science & Technology

    1981-06-15

    Fragments of the report's front matter indicate a résumé of stochastic, time-varying, linear system theory with application to active-sonar signal-processing problems, extending Meier's earlier SACLANTCEN résumé of the deterministic time-varying theory. Among the surviving results: the order in which systems are concatenated is unimportant, exactly analogous to time-invariant linear system theory.

  4. Digital program for solving the linear stochastic optimal control and estimation problem

    NASA Technical Reports Server (NTRS)

    Geyser, L. C.; Lehtinen, B.

    1975-01-01

    A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.

  5. Real-time solution of linear computational problems using databases of parametric reduced-order models with arbitrary underlying meshes

    NASA Astrophysics Data System (ADS)

    Amsallem, David; Tezaur, Radek; Farhat, Charbel

    2016-12-01

    A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
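The online step, producing a reduced operator at an unsampled parameter point from stored ones, can be caricatured by naive entrywise interpolation; the matrix-manifold machinery the paper uses to keep interpolants consistent is deliberately omitted, and the operators below are invented placeholders:

```python
# Entrywise linear interpolation between two precomputed 2x2 reduced
# operators stored at sampled parameter points mu0 and mu1.

def interp_operator(A0, A1, mu0, mu1, mu):
    w = (mu - mu0) / (mu1 - mu0)
    return [[(1 - w) * a + w * b for a, b in zip(r0, r1)]
            for r0, r1 in zip(A0, A1)]

A0 = [[1.0, 0.0], [0.0, 2.0]]   # ROM operator stored at mu = 0
A1 = [[3.0, 0.0], [0.0, 4.0]]   # ROM operator stored at mu = 1
A = interp_operator(A0, A1, 0.0, 1.0, 0.5)
```

Naive interpolation like this can leave the reduced bases inconsistent between sample points, which is exactly the problem the offline "consistency" transformation in the paper addresses.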

  6. Simultaneous source and attenuation reconstruction in SPECT using ballistic and single scattering data

    NASA Astrophysics Data System (ADS)

    Courdurier, M.; Monard, F.; Osses, A.; Romero, F.

    2015-09-01

    In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
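The Neumann-series building block mentioned above solves (I - K)x = b as x = Σₖ Kᵏb, convergent when the spectral radius of K is below 1. A minimal sketch on an invented small matrix, standing in for the linearized operator rather than the actual SPECT problem:

```python
# Neumann-series solve of (I - K) x = b: accumulate x = b + K b + K^2 b + ...

def matvec(K, x):
    return [sum(a * v for a, v in zip(row, x)) for row in K]

def neumann_solve(K, b, terms=60):
    x = b[:]
    term = b[:]
    for _ in range(terms):
        term = matvec(K, term)          # next power K^k b
        x = [xi + ti for xi, ti in zip(x, term)]
    return x

K = [[0.2, 0.1], [0.0, 0.3]]   # hypothetical contraction (spectral radius 0.3)
b = [1.0, 1.0]
x = neumann_solve(K, b)
# sanity check: (I - K) x should reproduce b
residual = [bi - (xi - ki) for bi, xi, ki in zip(b, x, matvec(K, x))]
```

The small-attenuation hypothesis in the paper plays the role of the contraction condition here: it keeps the scattering contribution small enough for the series to converge.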

  7. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.

  8. Linear decentralized systems with special structure. [for twin lift helicopters

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1982-01-01

    Certain fundamental structures associated with linear systems having internal symmetries are outlined. It is shown that the theory of finite-dimensional algebras and their representations are closely related to such systems. It is also demonstrated that certain problems in the decentralized control of symmetric systems are equivalent to long-standing problems of linear systems theory. Even though the structure imposed arose in considering the problems of twin-lift helicopters, any large system composed of several identical intercoupled control systems can be modeled by a linear system that satisfies the constraints imposed. Internal symmetry can be exploited to yield new system-theoretic invariants and a better understanding of the way in which the underlying structure affects overall system performance.

  9. Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions

    DTIC Science & Technology

    2007-09-01

    C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitable, Peter (2001, January). Experimental Modal Analysis, A Simple Non... variables under consideration. Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a... THEORY: The general problem statement for a non-linear constrained optimization problem is: minimize f(x) (the objective function) subject to ...

  10. Metabolic Complementarity and Genomics of the Dual Bacterial Symbiosis of Sharpshooters

    PubMed Central

    Wu, Dongying; Daugherty, Sean C; Van Aken, Susan E; Pai, Grace H; Watkins, Kisha L; Khouri, Hoda; Tallon, Luke J; Zaborsky, Jennifer M; Dunbar, Helen E; Tran, Phat L; Moran, Nancy A

    2006-01-01

    Mutualistic intracellular symbiosis between bacteria and insects is a widespread phenomenon that has contributed to the global success of insects. The symbionts, by provisioning nutrients lacking from diets, allow various insects to occupy or dominate ecological niches that might otherwise be unavailable. One such insect is the glassy-winged sharpshooter (Homalodisca coagulata), which feeds on xylem fluid, a diet exceptionally poor in organic nutrients. Phylogenetic studies based on rRNA have shown two types of bacterial symbionts to be coevolving with sharpshooters: the gamma-proteobacterium Baumannia cicadellinicola and the Bacteroidetes species Sulcia muelleri. We report here the sequencing and analysis of the 686,192–base pair genome of B. cicadellinicola and approximately 150 kilobase pairs of the small genome of S. muelleri, both isolated from H. coagulata. Our study, which to our knowledge is the first genomic analysis of an obligate symbiosis involving multiple partners, suggests striking complementarity in the biosynthetic capabilities of the two symbionts: B. cicadellinicola devotes a substantial portion of its genome to the biosynthesis of vitamins and cofactors required by animals and lacks most amino acid biosynthetic pathways, whereas S. muelleri apparently produces most or all of the essential amino acids needed by its host. This finding, along with other results of our genome analysis, suggests the existence of metabolic codependency between the two unrelated endosymbionts and their insect host. This dual symbiosis provides a model case for studying correlated genome evolution and genome reduction involving multiple organisms in an intimate, obligate mutualistic relationship. In addition, our analysis provides insight for the first time into the differences in symbionts between insects (e.g., aphids) that feed on phloem versus those like H. coagulata that feed on xylem. Finally, the genomes of these two symbionts provide potential targets for controlling plant pathogens such as Xylella fastidiosa, a major agroeconomic problem, for which H. coagulata and other sharpshooters serve as vectors of transmission. PMID:16729848

  11. Steady induction effects in geomagnetism. Part 1B: Geomagnetic estimation of steady surficial core motions: A non-linear inverse problem

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights, for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights, for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.

  12. Frequency assignments for HFDF receivers in a search and rescue network

    NASA Astrophysics Data System (ADS)

    Johnson, Krista E.

    1990-03-01

    This thesis applies a multiobjective linear programming approach to the problem of assigning frequencies to high frequency direction finding (HFDF) receivers in a search-and-rescue network in order to maximize the expected number of geolocations of vessels in distress. The problem is formulated as a multiobjective integer linear programming problem. The integrality of the solutions is guaranteed by the total unimodularity of the A-matrix. Two approaches are taken to solve the multiobjective linear programming problem: (1) the multiobjective simplex method as implemented in ADBASE; and (2) an iterative approach. In this approach, the individual objective functions are weighted and combined in a single additive objective function. The resulting single objective problem is expressed as a network programming problem and solved using SAS NETFLOW. The process is then repeated with different weightings for the objective functions. The solutions obtained from the multiobjective linear programs are evaluated using a FORTRAN program to determine which solution provides the greatest expected number of geolocations. This solution is then compared to the sample mean and standard deviation for the expected number of geolocations resulting from 10,000 random frequency assignments for the network.
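
    The weighted-sum scalarization and the integrality guarantee from total unimodularity can be sketched on a toy assignment problem (hypothetical data, not the thesis's HFDF network). Because the row/column-sum constraint matrix is totally unimodular, the LP relaxation below returns a 0/1 assignment for every weighting, with no integer programming required:

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-receiver / 2-frequency assignment. x[i,j] = 1 if receiver i
# gets frequency j, flattened as [x11, x12, x21, x22].
geoloc = np.array([3.0, 1.0, 2.0, 4.0])   # objective 1: expected geolocations
qual   = np.array([1.0, 2.0, 2.0, 1.0])   # objective 2: a link-quality score

A_eq = np.array([[1, 1, 0, 0],    # receiver 1 gets exactly one frequency
                 [0, 0, 1, 1],    # receiver 2 gets exactly one frequency
                 [1, 0, 1, 0],    # frequency 1 used exactly once
                 [0, 1, 0, 1]])   # frequency 2 used exactly once
b_eq = np.ones(4)

for w1, w2 in [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]:
    c = -(w1 * geoloc + w2 * qual)        # maximize the weighted sum
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
    assert np.allclose(res.x, np.round(res.x))  # integral, as TU guarantees
    print((w1, w2), np.round(res.x))
```

    Sweeping the weights, as in the thesis's iterative approach, traces out different efficient assignments that can then be compared on the expected number of geolocations.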

  13. Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method

    NASA Astrophysics Data System (ADS)

    Bekhoucha, F.; Rechak, S.; Cadou, J. M.

    2016-12-01

    In this paper, free vibrations of a rotating clamped Euler-Bernoulli beam with uniform cross section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are derived from the Rayleigh-Ritz method using a set of hybrid variables and based on a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem and represents the coupled lagging and stretch motions through gyroscopic terms, while the second is a standard eigenvalue problem and corresponds to the flapping motion. These two eigenvalue problems are transformed into two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system based on an augmented system, which transforms the original problem into a standard form with real symmetric matrices. By using techniques to resolve these singular problems with the continuation method, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, some singular points, due to the linear elastic assumption, are computed. Numerical tests of convergence are conducted and the obtained results are compared to the exact values. Results obtained by continuation are compared to those computed with the discrete eigenvalue problem.

  14. A New Pattern of Getting Nasty Number in Graphical Method

    NASA Astrophysics Data System (ADS)

    Sumathi, P.; Indhumathi, N.

    2018-04-01

    This paper proposes a new technique for obtaining nasty numbers using the graphical method in linear programming, and the technique is demonstrated on various linear programming problems. Some characterisations of nasty numbers are also discussed.

  15. Optimal blood glucose control in diabetes mellitus treatment using dynamic programming based on Ackerman’s linear model

    NASA Astrophysics Data System (ADS)

    Pradanti, Paskalia; Hartono

    2018-03-01

    Determination of the insulin injection dose in diabetes mellitus treatment can be considered as an optimal control problem. This article aims to simulate optimal blood glucose control for a patient with diabetes mellitus. The blood glucose regulation of a diabetic patient is represented by Ackerman’s Linear Model. The problem is then solved using the dynamic programming method. The desired blood glucose level is obtained by minimizing a performance index in Lagrange form. The results show that dynamic programming based on Ackerman’s Linear Model solves the problem well.
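
    A minimal sketch of dynamic programming for a linear model with a quadratic performance index is the backward Riccati recursion below. The 2x2 system matrix is an illustrative stand-in for a discretized two-state glucose/insulin model, not Ackerman's published parameter values:

```python
import numpy as np

# Backward dynamic-programming recursion for x_{k+1} = A x_k + B u_k
# with quadratic cost sum x'Qx + u'Ru (hypothetical parameter values).
A = np.array([[0.95, -0.10],
              [0.00,  0.90]])      # [glucose deviation, insulin deviation]
B = np.array([[0.0], [0.1]])       # insulin injection drives the 2nd state
Q = np.diag([1.0, 0.0])            # penalize glucose deviation only
R = np.array([[0.1]])              # penalize injection effort

N = 50
P = Q.copy()
gains = []
for _ in range(N):                 # backward sweep for feedback gains
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()

x = np.array([5.0, 0.0])           # initial glucose deviation
for K in gains:                    # closed-loop simulation
    u = -K @ x
    x = A @ x + B @ u
print(round(abs(x[0]), 3))         # deviation driven toward the set point
```

    The recursion computes a time-varying feedback law, so the simulated injection dose at each step depends on the current glucose deviation, mirroring the optimal-control formulation in the abstract.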

  16. Program for the solution of multipoint boundary value problems of quasilinear differential equations

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Linear equations are solved by a method of superposition of solutions of a sequence of initial value problems. For nonlinear equations and/or boundary conditions, the solution is iterative, and in each iteration a problem like the linear case is solved. A simple Taylor series expansion is used for the linearization of both nonlinear equations and nonlinear boundary conditions. The perturbation method of solution is used in preference to quasilinearization because of programming ease and smaller storage requirements; experiments indicate that the desired convergence properties exist, although no proof of convergence is given.
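
    For a linear two-point case, superposition of initial value problems can be sketched as follows: integrate one particular and one homogeneous initial value problem, then combine them linearly to satisfy the far boundary condition. The BVP y'' = -y with y(0) = 0, y(pi/2) = 1 is an illustrative example, not one taken from the report:

```python
import numpy as np

def rk4(f, y0, t):
    """Classical 4th-order Runge-Kutta integration on the grid t."""
    y = np.zeros((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i+1] - t[i]
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h/2, y[i] + h/2 * k1)
        k3 = f(t[i] + h/2, y[i] + h/2 * k2)
        k4 = f(t[i] + h, y[i] + h * k3)
        y[i+1] = y[i] + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return y

f = lambda t, y: np.array([y[1], -y[0]])   # y'' = -y as a 1st-order system
t = np.linspace(0.0, np.pi / 2, 201)

yp = rk4(f, np.array([0.0, 0.0]), t)       # particular IVP (identically 0 here)
yh = rk4(f, np.array([0.0, 1.0]), t)       # homogeneous IVP (= sin t)
c = (1.0 - yp[-1, 0]) / yh[-1, 0]          # match boundary value y(pi/2) = 1
y = yp[:, 0] + c * yh[:, 0]                # superposed BVP solution

print(np.max(np.abs(y - np.sin(t))) < 1e-8)   # matches exact solution sin t
```

    For nonlinear problems, the report's approach wraps exactly this kind of linear solve inside a Taylor-linearized iteration.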

  17. The Vertical Linear Fractional Initialization Problem

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Hartley, Tom T.

    1999-01-01

    This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.

  18. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
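
    The core computational trick, solving for the linearly related parameters analytically inside a Monte Carlo loop over the non-linear parameters, can be sketched on a toy exponential-decay model. This is an illustration of the general idea only, not the authors' Bayesian sampler:

```python
import numpy as np

# Parameters split into a non-linear part (decay rate theta) and a
# linear part (amplitude m). For each Monte Carlo draw of theta, the
# best-fitting m has a closed-form least-squares solution, so only
# theta needs to be sampled. All values here are synthetic.
rng = np.random.default_rng(1)
t = np.linspace(0, 5, 100)
d = 2.0 * np.exp(-0.7 * t) + 0.01 * rng.standard_normal(t.size)

best = (np.inf, None, None)
for theta in rng.uniform(0.1, 2.0, 5000):  # sample the non-linear parameter
    g = np.exp(-theta * t)                 # design vector for this theta
    m = (g @ d) / (g @ g)                  # analytic least-squares amplitude
    misfit = np.sum((d - m * g) ** 2)
    if misfit < best[0]:
        best = (misfit, theta, m)

_, theta_hat, m_hat = best
print(theta_hat, m_hat)                    # near the true values (0.7, 2.0)
```

    In the paper's framework the same marginalization extends to multiple data sets with unknown weights and to regularization hyperparameters, all handled in one Bayesian posterior.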

  19. A study of the use of linear programming techniques to improve the performance in design optimization problems

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance, relative to the feasible directions algorithm, on design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with a single constraint will reduce the total cost of optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving from using the linear method or from using the KS function to replace constraints.
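
    The Kreisselmeier-Steinhauser aggregation itself is a standard formula: KS(g) = (1/rho) ln(sum_i exp(rho * g_i)) approximates max(g) from above, so the single constraint KS(g) <= 0 implies every g_i <= 0, with an overshoot bounded by ln(n)/rho. A numerically stable sketch (illustrative values, not the study's constraint data):

```python
import numpy as np

def ks(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate of constraint values g_i,
    shifted by max(g) so the exponentials cannot overflow."""
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

g = np.array([-1.0, -0.3, -0.25])
for rho in (10.0, 50.0, 200.0):
    print(rho, ks(g, rho))        # approaches max(g) = -0.25 from above

assert ks(g, 50.0) >= g.max()                       # conservative bound
assert ks(g, 200.0) - g.max() < ks(g, 10.0) - g.max()  # tighter as rho grows
```

    Larger rho makes the aggregate tighter but also more sharply curved, which is one reason replacing many constraints with one KS constraint does not automatically reduce optimization cost.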

  20. Analysis and control of hourglass instabilities in underintegrated linear and nonlinear elasticity

    NASA Technical Reports Server (NTRS)

    Jacquotte, Olivier P.; Oden, J. Tinsley

    1994-01-01

    Methods are described to identify and correct a bad finite element approximation of the governing operator obtained when under-integration is used in numerical code for several model problems: the Poisson problem, the linear elasticity problem, and for problems in the nonlinear theory of elasticity. For each of these problems, the reason for the occurrence of instabilities is given, a way to control or eliminate them is presented, and theorems of existence, uniqueness, and convergence for the given methods are established. Finally, numerical results are included which illustrate the theory.

  1. Complementarity of information and the emergence of the classical world

    NASA Astrophysics Data System (ADS)

    Zwolak, Michael; Zurek, Wojciech

    2013-03-01

    We prove an anti-symmetry property relating accessible information about a system through some auxiliary system F and the quantum discord with respect to a complementary system F'. In Quantum Darwinism, where fragments of the environment relay information to observers, this relation allows us to understand some fundamental properties regarding correlations between a quantum system and its environment. First, it relies on a natural separation of accessible information and quantum information about a system. Under decoherence, this separation shows that accessible information is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. Second, it shows that objective information becomes accessible to many observers only when quantum information is relegated to correlations with the global environment and is therefore locally inaccessible. The resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality, and supports Bohr's intuition that quantum phenomena acquire classical reality only when communicated.

  2. de Sitter space as a tensor network: Cosmic no-hair, complementarity, and complexity

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Cao, ChunJun; Carroll, Sean M.; Chatwin-Davies, Aidan

    2017-12-01

    We investigate the proposed connection between de Sitter spacetime and the multiscale entanglement renormalization ansatz (MERA) tensor network, and ask what can be learned via such a construction. We show that the quantum state obeys a cosmic no-hair theorem: the reduced density operator describing a causal patch of the MERA asymptotes to a fixed point of a quantum channel, just as spacetimes with a positive cosmological constant asymptote to de Sitter space. The MERA is potentially compatible with a weak form of complementarity (local physics only describes single patches at a time, but the overall Hilbert space is infinite dimensional) or, with certain specific modifications to the tensor structure, a strong form (the entire theory describes only a single patch plus its horizon, in a finite-dimensional Hilbert space). We also suggest that de Sitter evolution has an interpretation in terms of circuit complexity, as has been conjectured for anti-de Sitter space.

  3. Comment on measuring the tt forward-backward asymmetry at ATLAS and CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arguin, Jean-Francois; Ligeti, Zoltan; Freytsis, Marat

    2011-10-01

    We suggest a new possibility for ATLAS and CMS to explore the tt forward-backward asymmetry measured at the Tevatron, by attempting to reconstruct tt events, with one of the tops decaying semileptonically in the central region (|η| < 2.5) and the other decaying hadronically in the forward region (|η| > 2.5). For several models which give comparable Tevatron signals, we study the charge asymmetry at the LHC as a function of cuts on |η| and on the tt invariant mass, m_tt. We show that there is an interesting complementarity between cuts on |η| and m_tt to suppress the dominant and symmetric gg → tt rate, and different combinations of cuts enhance the distinguishing power between models. This complementarity is likely to hold in other new physics scenarios as well, which affect the tt cross section, so it motivates extending tt reconstruction to higher |η|.

  4. Theory of Excitonic Delocalization for Robust Vibronic Dynamics in LH2.

    PubMed

    Caycedo-Soler, Felipe; Lim, James; Oviedo-Casado, Santiago; van Hulst, Niek F; Huelga, Susana F; Plenio, Martin B

    2018-06-11

    Nonlinear spectroscopy has revealed long-lasting oscillations in the optical response of a variety of photosynthetic complexes. Different theoretical models that involve the coherent coupling of electronic (excitonic) or electronic-vibrational (vibronic) degrees of freedom have been put forward to explain these observations. The ensuing debate concerning the relevance of either mechanism may have obscured their complementarity. To illustrate this balance, we quantify how the excitonic delocalization in the LH2 unit of Rhodopseudomonas acidophila purple bacterium leads to correlations of excitonic energy fluctuations, relevant coherent vibronic coupling, and importantly, a decrease in the excitonic dephasing rates. Combining these effects, we identify a feasible origin for the long-lasting oscillations observed in fluorescent traces from time-delayed two-pulse single-molecule experiments performed on this photosynthetic complex and use this approach to discuss the role of this complementarity in other photosynthetic systems.

  5. Complementarity of quantum discord and classically accessible information

    DOE PAGES

    Zwolak, Michael P.; Zurek, Wojciech H.

    2013-05-20

    The sum of the Holevo quantity (that bounds the capacity of quantum channels to transmit classical information about an observable) and the quantum discord (a measure of the quantumness of correlations of that observable) yields an observable-independent total given by the quantum mutual information. This split naturally delineates information about quantum systems accessible to observers – information that is redundantly transmitted by the environment – while showing that it is maximized for the quasi-classical pointer observable. Other observables are accessible only via correlations with the pointer observable. In addition, we prove an anti-symmetry property relating accessible information and discord. It shows that information becomes objective – accessible to many observers – only as quantum information is relegated to correlations with the global environment, and, therefore, locally inaccessible. Lastly, the resulting complementarity explains why, in a quantum Universe, we perceive objective classical reality while flagrantly quantum superpositions are out of reach.

  6. Power Moves Beyond Complementarity: A Staring Look Elicits Avoidance in Low Power Perceivers and Approach in High Power Perceivers

    PubMed Central

    Weick, Mario; McCall, Cade; Blascovich, Jim

    2017-01-01

    Sustained, direct eye-gaze—staring—is a powerful cue that elicits strong responses in many primate and nonprimate species. The present research examined whether fleeting experiences of high and low power alter individuals’ spontaneous responses to the staring gaze of an onlooker. We report two experimental studies showing that sustained, direct gaze elicits spontaneous avoidance tendencies in low power perceivers and spontaneous approach tendencies in high power perceivers. These effects emerged during interactions with different targets and when power was manipulated between-individuals (Study 1) and within-individuals (Study 2), thus attesting to a high degree of flexibility in perceivers’ reactions to gaze cues. Together, the present findings indicate that power can break the cycle of complementarity in individuals’ spontaneous responding: Low power perceivers complement and move away from, and high power perceivers reciprocate and move toward, staring onlookers. PMID:28903712

  7. Extended Minus-Strand DNA as Template for R-U5-Mediated Second-Strand Transfer in Recombinational Rescue of Primer Binding Site-Modified Retroviral Vectors

    PubMed Central

    Mikkelsen, Jacob Giehm; Lund, Anders H.; Dybkær, Karen; Duch, Mogens; Pedersen, Finn Skou

    1998-01-01

    We have previously demonstrated recombinational rescue of primer binding site (PBS)-impaired Akv murine leukemia virus-based vectors involving initial priming on endogenous viral sequences and template switching during cDNA synthesis to obtain PBS complementarity in second-strand transfer of reverse transcription (Mikkelsen et al., J. Virol. 70:1439–1447, 1996). By use of the same forced recombination system, we have now found recombinant proviruses of different structures, suggesting that PBS knockout vectors may be rescued through initial priming on endogenous virus RNA, read-through of the mutated PBS during minus-strand synthesis, and subsequent second-strand transfer mediated by the R-U5 complementarity of the plus strand and the extended minus-strand DNA acceptor template. Mechanisms for R-U5-mediated second-strand transfer and its possible role in retrovirus replication and evolution are discussed. PMID:9499117

  8. Programmable molecular recognition based on the geometry of DNA nanostructures.

    PubMed

    Woo, Sungwook; Rothemund, Paul W K

    2011-07-10

    From ligand-receptor binding to DNA hybridization, molecular recognition plays a central role in biology. Over the past several decades, chemists have successfully reproduced the exquisite specificity of biomolecular interactions. However, engineering multiple specific interactions in synthetic systems remains difficult. DNA retains its position as the best medium with which to create orthogonal, isoenergetic interactions, based on the complementarity of Watson-Crick binding. Here we show that DNA can be used to create diverse bonds using an entirely different principle: the geometric arrangement of blunt-end stacking interactions. We show that both binary codes and shape complementarity can serve as a basis for such stacking bonds, and explore their specificity, thermodynamics and binding rules. Orthogonal stacking bonds were used to connect five distinct DNA origami. This work, which demonstrates how a single attractive interaction can be developed to create diverse bonds, may guide strategies for molecular recognition in systems beyond DNA nanostructures.

  9. The extent of sequence complementarity correlates with the potency of cellular miRNA-mediated restriction of HIV-1

    PubMed Central

    Houzet, Laurent; Klase, Zachary; Yeung, Man Lung; Wu, Annie; Le, Shu-Yun; Quiñones, Mariam; Jeang, Kuan-Teh

    2012-01-01

    MicroRNAs (miRNAs) are 22-nt non-coding RNAs involved in the regulation of cellular gene expression and potential cellular defense against viral infection. Using in silico analyses, we predicted target sites for 22 human miRNAs in the HIV genome. Transfection experiments using synthetic miRNAs showed that five of these miRNAs capably decreased HIV replication. Using one of these five miRNAs, human miR-326 as an example, we demonstrated that the degree of complementarity between the predicted viral sequence and cellular miR-326 correlates, in a Dicer-dependent manner, with the potency of miRNA-mediated restriction of viral replication. Antagomirs to miR-326 that knocked down this cell endogenous miRNA increased HIV-1 replication in cells, suggesting that miR-326 is physiologically functional in moderating HIV-1 replication in human cells. PMID:23042677

  10. Linear SFM: A hierarchical approach to solving structure-from-motion problems by decoupling the linear and nonlinear components

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Huang, Shoudong; Dissanayake, Gamini

    2018-07-01

    This paper presents a novel hierarchical approach to solving structure-from-motion (SFM) problems. The algorithm begins with small local reconstructions based on nonlinear bundle adjustment (BA). These are then joined in a hierarchical manner using a strategy that requires solving a linear least squares optimization problem followed by a nonlinear transform. The algorithm can handle ordered monocular and stereo image sequences. Two stereo images or three monocular images are adequate for building each initial reconstruction. The bulk of the computation involves solving a linear least squares problem and, therefore, the proposed algorithm avoids three major issues associated with most of the nonlinear optimization algorithms currently used for SFM: the need for a reasonably accurate initial estimate, the need for iterations, and the possibility of being trapped in a local minimum. Also, by summarizing all the original observations into the small local reconstructions with associated information matrices, the proposed Linear SFM manages to preserve all the information contained in the observations. The paper also demonstrates that the proposed problem formulation results in a sparse structure that leads to an efficient numerical implementation. The experimental results using publicly available datasets show that the proposed algorithm yields solutions that are very close to those obtained using a global BA starting with an accurate initial estimate. The C/C++ source code of the proposed algorithm is publicly available at https://github.com/LiangZhaoPKUImperial/LinearSFM.

  11. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  12. Numerical methods on some structured matrix algebra problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1996-06-01

    This proposal concerned the design, analysis, and implementation of serial and parallel algorithms for certain structured matrix algebra problems. It emphasized large order problems and so focused on methods that can be implemented efficiently on distributed-memory MIMD multiprocessors. Such machines supply the computing power and extensive memory demanded by the large order problems. We proposed to examine three classes of matrix algebra problems: the symmetric and nonsymmetric eigenvalue problems (especially the tridiagonal cases) and the solution of linear systems with specially structured coefficient matrices. As all of these are of practical interest, a major goal of this work was to translate our research in linear algebra into useful tools for use by the computational scientists interested in these and related applications. Thus, in addition to software specific to the linear algebra problems, we proposed to produce a programming paradigm and library to aid in the design and implementation of programs for distributed-memory MIMD computers. We now report on our progress on each of the problems and on the programming tools.

  13. Determination of Nonlinear Stiffness Coefficients for Finite Element Models with Application to the Random Vibration Problem

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.

    1999-01-01

    In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires application of a finite-element program with a geometrically nonlinear static capability. The MSC/NASTRAN code is employed for this purpose. The equations of motion of an MDOF system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.
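The idea behind stochastic (equivalent) linearization can be conveyed on a single-DOF Duffing-type oscillator: replace the cubic stiffness by an equivalent linear stiffness that depends on the response variance, and iterate to self-consistency. The system, closure formula, and parameter values below are an illustrative sketch, not the paper's MDOF formulation.

```python
import math

# SDOF Duffing-type system under white noise with two-sided PSD S0:
#   x'' + 2*zeta*w0*x' + w0^2 * (x + eps*x^3) = w(t)
# Gaussian closure gives the equivalent stiffness
#   k_eq = w0^2 * (1 + 3*eps*var)
# and the linearized-system displacement variance
#   var = pi*S0 / (2*zeta*w0*k_eq);  iterate to a fixed point.
w0, zeta, eps, S0 = 1.0, 0.05, 0.5, 0.01

var = math.pi * S0 / (2 * zeta * w0 * w0 ** 2)   # start from eps = 0
for _ in range(100):
    k_eq = w0 ** 2 * (1 + 3 * eps * var)
    var = math.pi * S0 / (2 * zeta * w0 * k_eq)

rms_linear = math.sqrt(math.pi * S0 / (2 * zeta * w0 * w0 ** 2))
print(math.sqrt(var), rms_linear)   # hardening spring: linearized RMS < linear RMS
```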

  14. Attempts to locate residues in complementarity-determining regions of antibody combining sites that make contact with antigen.

    PubMed

    Kabat, E A; Wu, T T; Bilofsky, H

    1976-02-01

    From collected data on variable region sequences of heavy chains of immunoglobulins, the probability of random associations of any two amino-acid residues in the complementarity-determining segments was computed, and pairs of residues occurring significantly more frequently than expected were selected by computer. Significant associations between Phe 32 and Tyr 33, Phe 32 and Glu 35, and Tyr 33 and Glu 35 were found in six proteins, all of which were mouse myeloma proteins which bound phosphorylcholine (= phosphocholine). From the x-ray structure of McPC603, Tyr 33 and Glu 35 are contacting residues; a seventh phosphorylcholine-binding mouse myeloma protein also contained Phe 32 and Tyr 33 but position 35 had only been determined as Glx and thus this position had not been selected. Met 34 occurred in all seven phosphorylcholine-binding myeloma proteins but was also present at this position in 29 other proteins and thus was not selected; it is seen in the x-ray structure not to be a contacting residue. The role of Phe 32 is not obvious but it could have some conformational influence. A human phosphorylcholine-binding myeloma protein also had Phe, Tyr, and Met at positions 32, 33, and 34, but had Asp instead of Glu at position 35 and showed a lower binding constant. The ability to use sequence data to locate residues in complementarity-determining segments making contact with antigenic determinants and those playing essentially a structural role would contribute substantially to the understanding of antibody specificity.
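The kind of computation described — observed versus expected co-occurrence of residue pairs under an independence assumption — can be sketched in a few lines. The toy sequences below are invented stand-ins, not the immunoglobulin data of the study.

```python
from collections import Counter

# toy alignment of a short CDR segment (think positions 32-35);
# these sequences are hypothetical
seqs = ["FYME", "FYME", "FYME", "FYME", "FYME", "FYME", "AYGE",
        "GSME", "ASNE", "FTKD"]

def pair_enrichment(seqs, i, j):
    """Observed vs expected co-occurrence of the most common residues
    at positions i and j, assuming the positions are independent."""
    n = len(seqs)
    ci = Counter(s[i] for s in seqs)
    cj = Counter(s[j] for s in seqs)
    ai = ci.most_common(1)[0][0]
    aj = cj.most_common(1)[0][0]
    observed = sum(1 for s in seqs if s[i] == ai and s[j] == aj)
    expected = ci[ai] * cj[aj] / n      # product of marginal frequencies
    return (ai, aj), observed, expected

pair, obs, exp = pair_enrichment(seqs, 0, 1)
print(pair, obs, exp)   # ('F', 'Y') 6 4.9 -- observed exceeds expected
```

A significance test (e.g. a binomial or Fisher exact test on `obs` against `exp`) would complete the selection step the abstract describes.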

  15. Investigating functional redundancy versus complementarity in Hawaiian herbivorous coral reef fishes.

    PubMed

    Kelly, Emily L A; Eynaud, Yoan; Clements, Samantha M; Gleason, Molly; Sparks, Russell T; Williams, Ivor D; Smith, Jennifer E

    2016-12-01

    Patterns of species resource use provide insight into the functional roles of species and thus their ecological significance within a community. The functional role of herbivorous fishes on coral reefs has been defined through a variety of methods, but from a grazing perspective, less is known about the species-specific preferences of herbivores on different groups of reef algae and the extent of dietary overlap across an herbivore community. Here, we quantified patterns of redundancy and complementarity in a highly diverse community of herbivores at a reef on Maui, Hawaii, USA. First, we tracked fish foraging behavior in situ to record bite rate and type of substrate bitten. Second, we examined gut contents of select herbivorous fishes to determine consumption at a finer scale. Finally, we placed foraging behavior in the context of resource availability to determine how fish selected substrate type. All species predominantly (73-100%) foraged on turf algae, though there were differences among the types of macroalgae and other substrates bitten. Increased resolution via gut content analysis showed the composition of turf algae consumed by fishes differed across herbivore species. Consideration of foraging behavior by substrate availability revealed 50% of herbivores selected for turf as opposed to other substrate types, but overall, there were variable foraging portfolios across all species. Through these three methods of investigation, we found higher complementarity among herbivorous fishes than would be revealed using a single metric. These results suggest differences across species in the herbivore "rain of bites" that grazes and shapes benthic community composition.

  16. Phosphorus acquisition by citrate- and phytase-exuding Nicotiana tabacum plant mixtures depends on soil phosphorus availability and root intermingling.

    PubMed

    Giles, Courtney D; Richardson, Alan E; Cade-Menun, Barbara J; Mezeli, Malika M; Brown, Lawrie K; Menezes-Blackburn, Daniel; Darch, Tegan; Blackwell, Martin Sa; Shand, Charles A; Stutter, Marc I; Wendler, Renate; Cooper, Patricia; Lumsdon, David G; Wearing, Catherine; Zhang, Hao; Haygarth, Philip M; George, Timothy S

    2018-03-02

    Citrate and phytase root exudates contribute to improved phosphorus (P) acquisition efficiency in Nicotiana tabacum (tobacco) when both exudates are produced in a P deficient soil. To test the importance of root intermingling in the interaction of citrate and phytase exudates, Nicotiana tabacum plant-lines with constitutive expression of heterologous citrate (Cit) or fungal phytase (Phy) exudation traits were grown under two root treatments (roots separated or intermingled) and in two soils with contrasting soil P availability. Complementarity of plant mixtures varying in citrate efflux rate and mobility of the expressed phytase in soil was determined based on plant biomass and P accumulation. Soil P composition was evaluated using solution ³¹P NMR spectroscopy. In the soil with limited available P, positive complementarity occurred in Cit+Phy mixtures with roots intermingled. Root separation eliminated positive interactions in mixtures expressing the less mobile phytase (Aspergillus niger PhyA) whereas positive complementarity persisted in mixtures that expressed the more mobile phytase (Peniophora lycii PhyA). Soils from Cit+Phy mixtures contained less inorganic P and more organic P compared to monocultures. Exudate-specific strategies for the acquisition of soil P were most effective in P-limited soil and depended on citrate efflux rate and the relative mobility of the expressed phytase in soil. Plant growth and soil P utilization in plant systems with complementary exudation strategies are expected to be greatest where exudates persist in soil and are expressed synchronously in space and time.

  17. Ant-mediated ecosystem processes are driven by trophic community structure but mainly by the environment.

    PubMed

    Salas-Lopez, Alex; Houadria, Mickal; Menzel, Florian; Orivel, Jérôme

    2017-01-01

    The diversity and functional identity of organisms are known to be relevant to the maintenance of ecosystem processes but can be variable in different environments. Particularly, it is uncertain whether ecosystem processes are driven by complementary effects or by dominant groups of species. We investigated how community structure (i.e., the diversity and relative abundance of biological entities) explains the community-level contribution of Neotropical ant communities to different ecosystem processes in different environments. Ants were attracted with food resources representing six ant-mediated ecosystem processes in four environments: ground and vegetation strata in cropland and forest habitats. The exploitation frequencies of the baits were used to calculate the taxonomic and trophic structures of ant communities and their contribution to ecosystem processes considered individually or in combination (i.e., multifunctionality). We then investigated whether community structure variables could predict ecosystem processes and whether such relationships were affected by the environment. We found that forests presented a greater biodiversity and trophic complementarity and lower dominance than croplands, but this did not affect ecosystem processes. In contrast, trophic complementarity was greater on the ground than on vegetation and was followed by greater resource exploitation levels. Although ant participation in ecosystem processes can be predicted by means of trophic-based indices, we found that variations in community structure and performance in ecosystem processes were best explained by environment. We conclude that determining the extent to which the dominance and complementarity of communities affect ecosystem processes in different environments requires a better understanding of resource availability to different species.

  18. Improving protein-protein interaction prediction using evolutionary information from low-quality MSAs.

    PubMed

    Várnai, Csilla; Burkoff, Nikolas S; Wild, David L

    2017-01-01

    Evolutionary information stored in multiple sequence alignments (MSAs) has been used to identify the interaction interface of protein complexes, by measuring either co-conservation or co-mutation of amino acid residues across the interface. Recently, maximum entropy related correlated mutation measures (CMMs) such as direct information, decoupling direct from indirect interactions, have been developed to identify residue pairs interacting across the protein complex interface. These studies have focussed on carefully selected protein complexes with large, good-quality MSAs. In this work, we study protein complexes with a more typical MSA consisting of fewer than 400 sequences, using a set of 79 intramolecular protein complexes. Using a maximum entropy based CMM at the residue level, we develop an interface level CMM score to be used in re-ranking docking decoys. We demonstrate that our interface level CMM score compares favourably to the complementarity trace score, an evolutionary information-based score measuring co-conservation, when combined with the number of interface residues, a knowledge-based potential and the variability score of individual amino acid sites. We also demonstrate that, since co-mutation and co-complementarity in the MSA contain orthogonal information, the best prediction performance using evolutionary information can be achieved by combining the co-mutation information of the CMM with co-conservation information of a complementarity trace score, predicting a near-native structure as the top prediction for 41% of the dataset. The method presented is not restricted to small MSAs, and will likely improve interface prediction also for complexes with large and good-quality MSAs.

  19. Exploring the Roles of Nucleobase Desolvation and Shape Complementarity during the Misreplication of O6-Methylguanine

    PubMed Central

    Chavarria, Delia; Ramos-Serrano, Andrea; Hirao, Ichiro; Berdis, Anthony J.

    2011-01-01

    O6-methylguanine is a miscoding DNA lesion arising from the alkylation of guanine. This report uses the bacteriophage T4 DNA polymerase as a model to probe the roles of hydrogen-bonding interactions, shape/size, and nucleobase desolvation during the replication of this miscoding lesion. This was accomplished by using transient kinetic techniques to monitor the kinetic parameters for incorporating and extending natural and non-natural nucleotides. In general, the efficiency of nucleotide incorporation does not depend on the hydrogen-bonding potential of the incoming nucleotide. Instead, nucleobase hydrophobicity and shape complementarity appear to be the preeminent factors controlling nucleotide incorporation. In addition, shape complementarity plays a large role in controlling the extension of various mispairs containing O6-methylguanine. This is evident as the rate constants for extension correlate with proper interglycosyl distances and symmetry between the base angles of the formed mispair. Base pairs not conforming to an acceptable geometry within the polymerase’s active site are refractory to elongation and are processed via exonuclease proofreading. The collective data set encompassing nucleotide incorporation, extension, and excision is used to generate a model accounting for the mutagenic potential of O6-methylguanine observed in vivo. In addition, kinetic studies monitoring the incorporation and extension of non-natural nucleotides identified an analog that displays high selectivity for incorporation opposite O6-methylguanine compared to unmodified purines. The unusual selectivity of this analog for replicating damaged DNA provides a novel biochemical tool to study translesion DNA synthesis. PMID:21819995

  20. Root foraging elicits niche complementarity-dependent yield advantage in the ancient ‘three sisters’ (maize/bean/squash) polyculture

    PubMed Central

    Zhang, Chaochun; Postma, Johannes A.; York, Larry M.; Lynch, Jonathan P.

    2014-01-01

    Background and Aims Since ancient times in the Americas, maize, bean and squash have been grown together in a polyculture known as the ‘three sisters’. This polyculture and its maize/bean variant have greater yield than component monocultures on a land-equivalent basis. This study shows that below-ground niche complementarity may contribute to this yield advantage. Methods Monocultures and polycultures of maize, bean and squash were grown in two seasons in field plots differing in nitrogen (N) and phosphorus (P) availability. Root growth patterns of individual crops and entire polycultures were determined using a modified DNA-based technique to discriminate roots of different species. Key Results The maize/bean/squash and maize/bean polycultures had greater yield and biomass production on a land-equivalent basis than the monocultures. Increased biomass production was largely caused by a complementarity effect rather than a selection effect. The differences in root crown architecture and vertical root distribution among the components of the ‘three sisters’ suggest that these species have different, possibly complementary, nutrient foraging strategies. Maize foraged relatively shallower, common bean explored the vertical soil profile more equally, while the root placement of squash depended on P availability. The density of lateral root branching was significantly greater for all species in the polycultures than in the monocultures. Conclusions It is concluded that species differences in root foraging strategies increase total soil exploration, with consequent positive effects on the growth and yield of these ancient polycultures. PMID:25274551

  1. Sparse Substring Pattern Set Discovery Using Linear Programming Boosting

    NASA Astrophysics Data System (ADS)

    Kashihara, Kazuaki; Hatano, Kohei; Bannai, Hideo; Takeda, Masayuki

    In this paper, we consider finding a small set of substring patterns which classifies the given documents well. We formulate the problem as a 1-norm soft-margin optimization problem where each dimension corresponds to a substring pattern. Then we solve this problem by using LPBoost and an optimal substring discovery algorithm. Since the problem is a linear program, the resulting solution is likely to be sparse, which is useful for feature selection. We evaluate the proposed method on real data such as movie reviews.
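The 1-norm soft-margin problem can be written directly as a linear program. The sketch below solves a toy instance with `scipy.optimize.linprog`; the data, the penalty D, and the variable layout are illustrative assumptions, and an actual LPBoost run would instead generate pattern columns incrementally via the substring-discovery oracle.

```python
import numpy as np
from scipy.optimize import linprog

# toy data: 4 documents, 3 substring-pattern weak hypotheses in {-1, +1};
# the first column separates the labels perfectly
H = np.array([[ 1,  1,  1],
              [ 1, -1,  1],
              [-1, -1,  1],
              [-1,  1, -1]])
y = np.array([1, 1, -1, -1])
n, m = H.shape
D = 10.0                        # soft-margin penalty (hypothetical choice)

# variables: [w_1..w_m, xi_1..xi_n, rho]; maximize rho - D*sum(xi)
c = np.r_[np.zeros(m), D * np.ones(n), -1.0]
# margin constraints: rho - xi_i - y_i * (H w)_i <= 0
A_ub = np.hstack([-(y[:, None] * H), -np.eye(n), np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), np.zeros(n), 0.0][None, :]   # sum(w) = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (m + n) + [(None, None)])
w, rho = res.x[:m], res.x[-1]
print(w, rho)   # the weight concentrates on the perfectly separating pattern
```

Because the optimum of an LP lies at a vertex, the recovered weight vector is sparse, which is exactly the feature-selection property the abstract points out.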

  2. Domain decomposition in time for PDE-constrained optimization

    DOE PAGES

    Barker, Andrew T.; Stoll, Martin

    2015-08-28

    Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.

  3. High Order Finite Difference Methods, Multidimensional Linear Problems and Curvilinear Coordinates

    NASA Technical Reports Server (NTRS)

    Nordstrom, Jan; Carpenter, Mark H.

    1999-01-01

    Boundary and interface conditions are derived for high order finite difference methods applied to multidimensional linear problems in curvilinear coordinates. The boundary and interface conditions lead to conservative schemes and strict and strong stability provided that certain metric conditions are met.

  4. A high-accuracy optical linear algebra processor for finite element applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables) which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
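Multiplication by digital convolution, the key idea behind such encoded processors, is easy to emulate: the per-digit products come from a convolution (the step the optics performs in analog, with only small per-digit dynamic range), and a cheap digital pass resolves the carries. A generic sketch, not the report's architecture:

```python
import numpy as np

def digits(x, base=10):
    """Little-endian digit vector of a nonnegative integer."""
    out = []
    while True:
        out.append(x % base)
        x //= base
        if x == 0:
            return out

def resolve_carries(raw, base=10):
    """Digital post-processing: reduce convolution outputs
    (which may exceed the base) to proper digits."""
    out, carry = [], 0
    for d in raw:
        carry, r = divmod(int(d) + carry, base)
        out.append(r)
    while carry:
        carry, r = divmod(carry, base)
        out.append(r)
    return out

def conv_multiply(a, b, base=10):
    """Multiply by convolving digit vectors, then fix up carries."""
    raw = np.convolve(digits(a, base), digits(b, base))
    ds = resolve_carries(raw, base)
    return sum(d * base ** k for k, d in enumerate(ds))

print(conv_multiply(1234, 5678))   # 7006652, i.e. 1234 * 5678
```

Smaller bases (e.g. base 2) shrink the dynamic range each analog channel must carry, at the cost of longer digit vectors — the trade-off the digital encoding exploits.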

  5. Old Wine in New Bottles: Quantum Theory in Historical Perspective.

    ERIC Educational Resources Information Center

    Bent, Henry A.

    1984-01-01

    Discusses similarities between chemistry and three central concepts of quantum physics: (1) stationary states; (2) wave functions; and (3) complementarity. Based on these and other similarities, it is indicated that quantum physics is a chemical physics. (JN)

  6. A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints

    NASA Astrophysics Data System (ADS)

    Estiningsih, Y.; Farikhin; Tjahjana, R. H.

    2018-03-01

    Modelling and solving practical optimization problems are important techniques in linear programming. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the unnecessary calculations associated with them when solving a linear programming problem. Many methods have been proposed for the identification of redundant constraints. This paper presents a comparison of the Heuristic method and Llewellyn’s rules for the identification of redundant constraints.
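A standard LP-based redundancy test, one of the approaches such comparisons build on, checks for each constraint whether its left-hand side can exceed its bound under the remaining constraints. A small sketch with `scipy.optimize.linprog` on an invented system (Llewellyn's rules serve as a cheaper sign-based screen for the same question):

```python
import numpy as np
from scipy.optimize import linprog

# constraints A x <= b, together with x >= 0; the last one is redundant
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 3.0, 3.0, 12.0])

def redundant(A, b, k):
    """Constraint k is redundant iff max a_k.x over the remaining
    constraints does not exceed b_k."""
    mask = np.arange(len(b)) != k
    res = linprog(-A[k], A_ub=A[mask], b_ub=b[mask],
                  bounds=[(0, None)] * A.shape[1])
    return res.status == 0 and -res.fun <= b[k] + 1e-9

flags = [redundant(A, b, k) for k in range(len(b))]
print(flags)   # only x + 2y <= 12 is redundant here
```

Solving one LP per constraint is exactly the cost that cheaper screening rules try to avoid, which motivates comparisons like the one in the paper.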

  7. Discrete Methods and their Applications

    DTIC Science & Technology

    1993-02-03

    problem of finding all near-optimal solutions to a linear program. In paper [18], we give a brief and elementary proof of a result of Hoffman (1952) about...relies only on linear programming duality; second, we obtain geometric and algebraic representations of the bounds that are determined explicitly in...same. We have studied the problem of finding the minimum n such that a given unit interval graph is an n-graph. A linear time algorithm to compute

  8. On optimal control of linear systems in the presence of multiplicative noise

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1976-01-01

    This correspondence considers the problem of optimal regulator design for discrete time linear systems subjected to white state-dependent and control-dependent noise in addition to additive white noise in the input and the observations. A pseudo-deterministic problem is first defined in which multiplicative and additive input disturbances are present, but noise-free measurements of the complete state vector are available. This problem is solved via discrete dynamic programming. Next is formulated the problem in which the number of measurements is less than that of the state variables and the measurements are contaminated with state-dependent noise. The inseparability of control and estimation is brought into focus, and an 'enforced separation' solution is obtained via heuristic reasoning in which the control gains are shown to be the same as those in the pseudo-deterministic problem. An optimal linear state estimator is given in order to implement the controller.

  9. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  10. The linear Boltzmann equation in slab geometry - Development and verification of a reliable and efficient solution

    NASA Technical Reports Server (NTRS)

    Stamnes, K.; Lie-Svendsen, O.; Rees, M. H.

    1991-01-01

    The linear Boltzmann equation can be cast in a form mathematically identical to the radiation-transport equation. A multigroup procedure is used to reduce the energy (or velocity) dependence of the transport equation to a series of one-speed problems. Each of these one-speed problems is equivalent to the monochromatic radiative-transfer problem, and existing software is used to solve this problem in slab geometry. The numerical code conserves particles in elastic collisions. Generic examples are provided to illustrate the applicability of this approach. Although this formalism can, in principle, be applied to a variety of test particle or linearized gas dynamics problems, it is particularly well-suited to study the thermalization of suprathermal particles interacting with a background medium when the thermal motion of the background cannot be ignored. Extensions of the formalism to include external forces and spherical geometry are also feasible.

  11. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations, in a manner that takes full advantage of the complementarity of each one. Previous relevant researches in this field have been impeded by the difficulty in identifying an appropriate single segmentation fusion criterion, providing the best possible, i.e., the most informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem, to obtain a final improved result of segmentation. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
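The final decision-making step named above, TOPSIS, ranks alternatives by their relative closeness to an ideal point. A minimal generic implementation follows; the candidate scores and weights are hypothetical stand-ins for the consistency-error (a cost) and F-measure (a benefit) values of fusion candidates.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by similarity to the ideal solution.
    benefit[j] is True when larger values of criterion j are better."""
    V = weights * X / np.linalg.norm(X, axis=0)       # vector-normalise, weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti,  axis=1)
    return d_neg / (d_pos + d_neg)    # closeness in [0, 1], higher is better

# hypothetical fusion candidates: [global consistency error, F-measure]
X = np.array([[0.10, 0.80],
              [0.05, 0.60],
              [0.30, 0.90]])
scores = topsis(X, weights=np.array([0.5, 0.5]),
                benefit=np.array([False, True]))
print(scores.argmax())   # index of the preferred candidate
```

The closeness score trades off both criteria at once, which is what lets the fusion model pick a single solution from the Pareto set produced by the multi-objective optimization.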

  12. The interface between population and development models, plans and policies.

    PubMed

    Cohen, S I

    1989-01-01

    Scant attention has been given to integrating policy issues in population economics and development economics into more general frameworks. Reviewing the state of the art, this paper examines problems in incorporating population economics variables in development planning. Specifically, conceptual issues in defining population economics variables, modelling relationships between them, and operationalizing frameworks for decision making are explored with hopes of yielding tentative solutions. Several controversial policy issues affecting the development process are also examined in the closing section. 2 of these issues would be the social efficiency of interventions with fertility, and of resource allocations to human development. The effective combination between agriculture and industry in promoting and equitably distributing income growth among earning population groups is a 3rd issue of consideration. Finally, the paper looks at the optimal combination between transfer payments and provisions in kind in guaranteeing minimum consumption needs for poverty groups. Overall, the paper finds significant obstacles to refining the integration of population economics and development policy. Namely, integrating time and place dimensions in classifying people by activity, operationalizing population economics variable models to meet the practical situations of planning and programs, and assessing conflicts and complementarities between alternative policies pose problems. 2 scholarly comments follow the main body of the paper.

  13. Peace studies and conflict resolution: the need for transdisciplinarity.

    PubMed

    Galtung, Johan

    2010-02-01

    Peace studies seeks to understand the negation of violence through conflict transformation, cooperation and harmony by drawing from many disciplines, including psychology, sociology and anthropology, political science, economics, international relations, international law and history. This raises the problem of the complementarity, coexistence and integration of different systems of knowledge. In fact, all of the human and social sciences are products of the post-Westphalian state system and so reify the state and its internal and international system and focus on this as the main source of political conflict. Conflicts, however, can arise from other distinctions involving gender, generation, race, class and so on. To contribute to peace building and conflict resolution, the social sciences must be globalized, developing theories that address conflicts at the levels of interpersonal interaction (micro), within countries (meso), between nations (macro), and between whole regions or civilizations (mega). Psychiatry and the "psy" disciplines can contribute to peace building and conflict resolution through understanding the interactions between processes at each of these levels and the mental health or illness of individuals.

  14. Achieving quantum precision limit in adaptive qubit state tomography

    NASA Astrophysics Data System (ADS)

    Hou, Zhibo; Zhu, Huangjun; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can

    2016-02-01

    The precision limit in quantum state tomography is of great interest not only to practical applications but also to foundational studies. However, little is known about this subject in the multiparameter setting even theoretically due to the subtle information trade-off among incompatible observables. In the case of a qubit, the theoretic precision limit was determined by Hayashi as well as Gill and Massar, but attaining the precision limit in experiments has remained a challenging task. Here we report the first experiment that achieves this precision limit in adaptive quantum state tomography on optical polarisation qubits. The two-step adaptive strategy used in our experiment is very easy to implement in practice. Yet it is surprisingly powerful in optimising most figures of merit of practical interest. Our study may have significant implications for multiparameter quantum estimation problems, such as quantum metrology. Meanwhile, it may promote our understanding about the complementarity principle and uncertainty relations from the information theoretic perspective.

  15. A scientific program for infrared, submillimeter and radio astronomy from space: A report by the Management Operations Working Group

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Important and fundamental scientific progress can be attained through space observations in the wavelengths longward of 1 micron. The formation of galaxies, stars, and planets, the origin of quasars and the nature of active galactic nuclei, the large scale structure of the Universe, and the problem of the missing mass, are among the major scientific issues that can be addressed by these observations. Significant advances in many areas of astrophysics can be made over the next 20 years by implementing the outlined program. This program combines large observatories with smaller projects to create an overall scheme that emphasized complementarity and synergy, advanced technology, community support and development, and the training of the next generation of scientists. Key aspects of the program include: the Space Infrared Telescope Facility; the Stratospheric Observatory for Infrared Astronomy; a robust program of small missions; and the creation of the technology base for future major observatories.

  16. Holding the Hunger Games Hostage at the Gym: An Evaluation of Temptation Bundling

    PubMed Central

    Milkman, Katherine L.; Minson, Julia A.; Volpp, Kevin G. M.

    2014-01-01

    We introduce and evaluate the effectiveness of temptation bundling—a method for simultaneously tackling two types of self-control problems by harnessing consumption complementarities. We describe a field experiment measuring the impact of bundling instantly gratifying but guilt-inducing “want” experiences (enjoying page-turner audiobooks) with valuable “should” behaviors providing delayed rewards (exercising). We explore whether such bundles increase should behaviors and whether people would pay to create these restrictive bundles. Participants were randomly assigned to a full treatment condition with gym-only access to tempting audio novels, an intermediate treatment involving encouragement to restrict audiobook enjoyment to the gym, or a control condition. Initially, full and intermediate treatment participants visited the gym 51% and 29% more frequently, respectively, than control participants, but treatment effects declined over time (particularly following Thanksgiving). After the study, 61% of participants opted to pay to have gym-only access to iPods containing tempting audiobooks, suggesting demand for this commitment device. PMID:25843979

  17. Automatic forest-fire measuring using ground stations and Unmanned Aerial Systems.

    PubMed

    Martínez-de Dios, José Ramiro; Merino, Luis; Caballero, Fernando; Ollero, Anibal

    2011-01-01

    This paper presents a novel system for automatic forest-fire measurement using cameras distributed at ground stations and mounted on Unmanned Aerial Systems (UAS). It can obtain geometrical measurements of forest fires in real-time such as the location and shape of the fire front, flame height and rate of spread, among others. Measurement of forest fires is a challenging problem that is affected by numerous potential sources of error. The proposed system addresses them by exploiting the complementarities between infrared and visual cameras located at different ground locations together with others onboard Unmanned Aerial Systems (UAS). The system applies image processing and geo-location techniques to obtain forest-fire measurements individually from each camera and then integrates the results from all the cameras using statistical data fusion techniques. The proposed system has been extensively tested and validated in close-to-operational conditions in field fire experiments with controlled safety conditions carried out in Portugal and Spain from 2001 to 2006.

  18. Automatic Forest-Fire Measuring Using Ground Stations and Unmanned Aerial Systems

    PubMed Central

    Martínez-de Dios, José Ramiro; Merino, Luis; Caballero, Fernando; Ollero, Anibal

    2011-01-01

    This paper presents a novel system for automatic forest-fire measurement using cameras distributed at ground stations and mounted on Unmanned Aerial Systems (UAS). It can obtain geometrical measurements of forest fires in real-time such as the location and shape of the fire front, flame height and rate of spread, among others. Measurement of forest fires is a challenging problem that is affected by numerous potential sources of error. The proposed system addresses them by exploiting the complementarities between infrared and visual cameras located at different ground locations together with others onboard Unmanned Aerial Systems (UAS). The system applies image processing and geo-location techniques to obtain forest-fire measurements individually from each camera and then integrates the results from all the cameras using statistical data fusion techniques. The proposed system has been extensively tested and validated in close-to-operational conditions in field fire experiments with controlled safety conditions carried out in Portugal and Spain from 2001 to 2006. PMID:22163958

  19. A new neural network model for solving random interval linear programming problems.

    PubMed

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Train repathing in emergencies based on fuzzy linear programming.

    PubMed

    Meng, Xuelei; Cui, Bingmou

    2014-01-01

    Train pathing is a typical problem of assigning train trips to sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem, determining the paths of train trips in emergencies. We analyze the influencing factors of train pathing, such as transfer cost, running cost, and the cost of adverse social effects. With overall consideration of the segment and station capacity constraints, we build a fuzzy linear programming model to solve the train pathing problem. We design fuzzy membership functions to describe the fuzzy coefficients. Furthermore, contraction-expansion factors are introduced to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of those ranges. We propose a method based on triangular fuzzy coefficients and transform the train pathing model (a fuzzy linear program) into a determinate linear model to solve the fuzzy linear programming problem. An emergency scenario is constructed based on real data from the Beijing-Shanghai Railway. The model was solved, and the computational results demonstrate the validity of the model and the efficiency of the algorithm.
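
    The core reduction described above, replacing each triangular fuzzy coefficient by a crisp value so that an ordinary LP solver applies, can be sketched as follows. This is an illustrative defuzzification using the centroid of the triangle, not necessarily the paper's exact contraction-expansion scheme, and the tiny two-segment pathing LP and its cost data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def defuzzify(tri):
    """Centroid of a triangular fuzzy number (left, mode, right)."""
    left, mode, right = tri
    return (left + mode + right) / 3.0

# Hypothetical data: one trip, two alternative segments, with
# segment costs given as triangular fuzzy numbers.
fuzzy_costs = [(4.0, 5.0, 6.0), (2.0, 3.0, 7.0)]
c = [defuzzify(t) for t in fuzzy_costs]            # crisp costs [5.0, 4.0]

# Assign one unit of path flow: x1 + x2 = 1, with 0 <= xi <= 1 (capacity).
res = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[1.0], bounds=[(0, 1), (0, 1)])
```

    With the centroid values above, the solver routes the whole trip over the second segment, whose defuzzified cost (4.0) is lower.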

  1. A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov

    NASA Astrophysics Data System (ADS)

    Greenough, J. A.; Rider, W. J.

    2004-05-01

    A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory (WENO5) method to a modern piecewise-linear, second-order version of Godunov's method (PLMDE) for the compressible Euler equations. A series of one-dimensional test problems are examined, beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction, and (5) the Woodward and Colella interacting shock wave problem. For each problem, run times, density error norms, and convergence rates are reported for each method as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of the error, measured against an exact solution or, when an analytic solution is not available, against a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases, however, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is, run times are equal for both numerical methods, then PLMDE uniformly produces lower errors than WENO5 for the fixed computational cost on the test problems considered here.
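
    The first test above, linear advection of a Gaussian pulse, is easy to reproduce at small scale. The sketch below uses a first-order upwind scheme rather than WENO5 or PLMDE, so it only illustrates the error-versus-resolution measurement underlying such comparisons, not the methods in the study; the grid sizes, pulse width, and CFL number are arbitrary choices.

```python
import numpy as np

def upwind_error(n, cfl=0.5):
    """Advect a Gaussian pulse with first-order upwind (u_t + u_x = 0,
    periodic on [0,1)) up to t = 0.25 and return the mean absolute error."""
    x = np.arange(n) / n
    u = np.exp(-200.0 * (x - 0.25) ** 2)
    steps = n // 2                        # t = steps * cfl / n = 0.25
    for _ in range(steps):
        u = u - cfl * (u - np.roll(u, 1))   # upwind for positive wave speed
    exact = np.exp(-200.0 * (x - 0.5) ** 2)  # pulse shifted right by 0.25
    return np.mean(np.abs(u - exact))

errors = [upwind_error(n) for n in (100, 200, 400)]
```

    Halving the mesh spacing roughly halves the error for a first-order scheme; convergence-rate tables like the study's are built from exactly this kind of grid sequence.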

  2. Solving Fuzzy Optimization Problem Using Hybrid Ls-Sa Method

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian

    2011-06-01

    Fuzzy optimization has been one of the most prominent topics in the broad area of computational intelligence. It is especially relevant in the field of fuzzy non-linear programming, and its applications and practical realizations can be seen in many real-world problems. In this paper a large-scale non-linear fuzzy programming problem is solved by hybrid optimization techniques combining Line Search (LS), Simulated Annealing (SA), and Pattern Search (PS). An industrial production planning problem with a cubic objective function, 8 decision variables, and 29 constraints has been solved successfully using the LS-SA-PS hybrid optimization technique. The computational results for the objective function with respect to vagueness factor and level of satisfaction are provided in the form of 2D and 3D plots. The outcome is very promising and strongly suggests that the hybrid LS-SA-PS algorithm is efficient and productive in solving large-scale non-linear fuzzy programming problems.

  3. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to balance between total supply and total demand. Exact methods such as the northwest-corner, Vogel, Russell, and minimal-cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem at any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve on the solution obtained by PSO alone.
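
    The PSO-plus-mutation idea can be sketched generically. The code below applies a GA-style mutation operator inside a standard PSO loop on a simple continuous test function; the swarm parameters, mutation rate, and test function are illustrative choices, not the paper's transportation-specific encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def psoga(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, pm=0.1):
    """PSO with a GA-style mutation operator (the PSOGA idea, sketched)."""
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        # GA mutation: randomly perturb a fraction pm of the coordinates
        mask = rng.random(x.shape) < pm
        x = np.where(mask, x + rng.normal(0.0, 0.5, x.shape), x)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

best, val = psoga(lambda z: np.sum(z**2), dim=3)
```

    The mutation step keeps some diversity in the swarm after the velocity update has pulled particles toward the incumbent best, which is the improvement PSOGA targets.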

  4. Anomaly General Circulation Models.

    NASA Astrophysics Data System (ADS)

    Navarra, Antonio

    The feasibility of the anomaly model is assessed using barotropic and baroclinic models. In the barotropic case, both a stationary and a time-dependent model have been formulated and constructed, whereas only the stationary, linear case is considered in the baroclinic case. Results from the barotropic model indicate that a relation between the stationary solution and the time-averaged non-linear solution exists. The stationary linear baroclinic solution can therefore be considered with some confidence. The linear baroclinic anomaly model poses a formidable mathematical problem because it is necessary to solve a gigantic linear system to obtain the solution. A new method for finding solutions of large linear systems, based on a projection onto the Krylov subspace, is shown to be successful when applied to the linearized baroclinic anomaly model. The scheme consists of projecting the original linear system onto the Krylov subspace, thereby reducing the dimensionality of the matrix to be inverted to obtain the solution. With an appropriate setting of the damping parameters, the iterative Krylov method reaches a solution even using a Krylov subspace ten times smaller than the original space of the problem. This generality allows the treatment of the important problem of linear waves in the atmosphere. A larger class (non-zonally symmetric) of basic states can now be treated for the baroclinic primitive equations. These problems lead to large unsymmetric linear systems of order 10000 and more, which can now be successfully tackled by the Krylov method. The (R7) linear anomaly model is used to investigate extensively the linear response to equatorial and mid-latitude prescribed heating. The results indicate that the solution is deeply affected by the presence of the stationary waves in the basic state. The instability of the asymmetric flows, first pointed out by Simmons et al. (1983), is active also in the baroclinic case. However, the presence of baroclinic processes modifies the dominant response. The most sensitive areas are identified; they correspond to north Japan, the Pole and Greenland regions. A limited set of higher resolution (R15) experiments indicates that this situation is still present and enhanced at higher resolution. The linear anomaly model is also applied to a realistic case. (Abstract shortened with permission of author.)

  5. Nonlinearity measure and internal model control based linearization in anti-windup design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perev, Kamen

    2013-12-18

    This paper considers the problem of internal model control based linearization in anti-windup design. The nonlinearity measure concept is used for quantifying the control system's degree of nonlinearity. The linearizing effect of a modified internal model control structure is presented by comparing the nonlinearity measures of the open-loop and closed-loop systems. It is shown that the linearization properties are improved by increasing the control system's local feedback gain. However, it is emphasized that at the same time the stability of the system deteriorates. The conflicting goals of stability and linearization are resolved by solving the design problem in different frequency ranges.

  6. Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables

    NASA Astrophysics Data System (ADS)

    Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.

    2018-02-01

    In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem—the unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The unconstrained optimization problem dual to the regularized problem of finding the projection onto the solution set of the system is also considered. A connection of duality theory and Newton's method with some known algorithms for projecting onto a standard simplex is shown. Using the transport linear programming problem as an example, it is demonstrated that exploiting the special structure of the constraints can increase the efficiency of calculating the generalized Hessian matrix. Some examples of numerical calculations using MATLAB are presented.
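
    One of the known algorithms for projecting onto a standard simplex referred to above is the classical sort-based projection; a minimal version is sketched below. This is a standard algorithm from the literature, not necessarily the variant discussed in the paper.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto the standard simplex
    {x : x >= 0, sum(x) = 1}, via the sort-and-threshold rule."""
    u = np.sort(y)[::-1]                       # coordinates in decreasing order
    css = np.cumsum(u)
    k = np.arange(1, len(y) + 1)
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    lam = (1.0 - css[rho]) / (rho + 1)         # shift that makes the sum 1
    return np.maximum(y + lam, 0.0)

p = project_simplex(np.array([0.6, 1.2, -0.4]))
```

    For the example vector, the projection clips the negative coordinate to zero and shifts the rest so they sum to one, giving [0.2, 0.8, 0.0].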

  7. Generalised Assignment Matrix Methodology in Linear Programming

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2012-01-01

    Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
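
    As a small illustration of the assignment-matrix approach (in Python rather than the Excel Solver workflow the paper describes), the LP relaxation of the assignment problem is integral, so a dedicated solver such as SciPy's linear_sum_assignment returns an optimal 0-1 assignment directly; the cost matrix below is made up.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Assignment matrix: cost[i, j] = cost of assigning task j to worker i.
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])

rows, cols = linear_sum_assignment(cost)   # one task per worker, min total cost
total = cost[rows, cols].sum()
```

    Here the optimum assigns worker 0 to task 1, worker 1 to task 0, and worker 2 to task 2, for a total cost of 5; checking all 3! permutations by hand confirms it.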

  8. Numerical methods in Markov chain modeling

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
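
    Both formulations mentioned above, the eigenvector problem and the homogeneous singular linear system, can be demonstrated on a small chain; the 3-state transition matrix below is a toy example, not one of the report's test problems.

```python
import numpy as np

# Row-stochastic transition matrix of a 3-state chain (toy example).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])

# Stationary pi solves pi P = pi with sum(pi) = 1, i.e. the homogeneous
# singular system (P^T - I) pi = 0 plus a normalization row.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Cross-check: the eigenvector of P^T for the known eigenvalue 1.
w, V = np.linalg.eig(P.T)
v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
v = v / v.sum()
```

    Krylov methods matter when P is large and sparse; for this 3-state chain the dense solves above already agree to machine precision.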

  9. Image reconstruction

    NASA Astrophysics Data System (ADS)

    Vasilenko, Georgii Ivanovich; Taratorin, Aleksandr Markovich

    Linear, nonlinear, and iterative image-reconstruction (IR) algorithms are reviewed. Theoretical results are presented concerning controllable linear filters, the solution of ill-posed functional minimization problems, and the regularization of iterative IR algorithms. Attention is also given to the problem of superresolution and analytical spectrum continuation, the solution of the phase problem, and the reconstruction of images distorted by turbulence. IR in optical and optical-digital systems is discussed with emphasis on holographic techniques.

  10. A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints

    NASA Technical Reports Server (NTRS)

    Hanson, R. J.; Krogh, Fred T.

    1992-01-01

    A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.

  11. Enhanced algorithms for stochastic programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishna, Alamuru S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to obtain a starting solution and to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
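
    The variance-reduction step described above can be illustrated with a stand-in recourse function. The sketch below estimates the mean of a simple piecewise-linear function of a normal variable using antithetic variates, one of the standard variance-reduction techniques; the function and sample sizes are hypothetical, and the dissertation's actual importance-sampling scheme is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(42)

def recourse(z):
    """A cheap, monotone piecewise-linear stand-in for a recourse function."""
    return np.maximum(z - 1.0, 0.0)

n, k = 20_000, 100   # normal draws per estimate, independent replications

def batch_var(estimator):
    """Empirical variance of an estimator across k independent replications."""
    return np.var([estimator(rng.standard_normal(n)) for _ in range(k)])

v_plain = batch_var(lambda z: recourse(z).mean())
v_anti = batch_var(lambda z: 0.5 * (recourse(z) + recourse(-z)).mean())
```

    For a monotone integrand the antithetic pair is negatively correlated, so v_anti comes out smaller than v_plain at the same number of underlying normal draws.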

  12. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, T.E.; Franke, O.L.; Bennett, G.D.

    1984-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important application in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to groundwater hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)
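
    For a discretized linear system the statement above is a one-line check: if A x1 = b1 and A x2 = b2, then A(x1 + x2) = b1 + b2. A numerical sketch, where the matrix and forcing vectors are arbitrary stand-ins for a linear ground-water model:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)  # a well-conditioned linear operator
b1 = rng.standard_normal(4)                        # stress 1 (e.g. one pumping well)
b2 = rng.standard_normal(4)                        # stress 2 (e.g. another well)

x1 = np.linalg.solve(A, b1)      # response to stress 1 alone
x2 = np.linalg.solve(A, b2)      # response to stress 2 alone
x12 = np.linalg.solve(A, b1 + b2)  # response to both stresses together
```

    The combined response equals the sum of the individual responses, which is exactly why drawdowns from separate wells can be computed independently and added.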

  13. Weak stability of the plasma-vacuum interface problem

    NASA Astrophysics Data System (ADS)

    Catania, Davide; D'Abbicco, Marcello; Secchi, Paolo

    2016-09-01

    We consider the free boundary problem for the two-dimensional plasma-vacuum interface in ideal compressible magnetohydrodynamics (MHD). In the plasma region, the flow is governed by the usual compressible MHD equations, while in the vacuum region we consider the Maxwell system for the electric and the magnetic fields. At the free interface, driven by the plasma velocity, the total pressure is continuous and the magnetic field on both sides is tangent to the boundary. We study the linear stability of rectilinear plasma-vacuum interfaces by computing the Kreiss-Lopatinskiĭ determinant of an associated linearized boundary value problem. Apart from possible resonances, we obtain that the piecewise constant plasma-vacuum interfaces are always weakly linearly stable, independently of the size of tangential velocity, magnetic and electric fields on both sides of the characteristic discontinuity. We also prove that solutions to the linearized problem obey an energy estimate with a loss of regularity with respect to the source terms, both in the interior domain and on the boundary, due to the failure of the uniform Kreiss-Lopatinskiĭ condition, as the Kreiss-Lopatinskiĭ determinant associated with this linearized boundary value problem has roots on the boundary of the frequency space. In the proof of the a priori estimates, a crucial part is played by the construction of symmetrizers for a reduced differential system, which has poles at which the Kreiss-Lopatinskiĭ condition may fail simultaneously.

  14. Nonlinearization and waves in bounded media: old wine in a new bottle

    NASA Astrophysics Data System (ADS)

    Mortell, Michael P.; Seymour, Brian R.

    2017-02-01

    We consider problems such as a standing wave in a closed straight tube, a self-sustained oscillation, damped resonance, evolution of resonance and resonance between concentric spheres. These nonlinear problems, and other similar ones, have been solved by a variety of techniques when it is seen that linear theory fails. The unifying approach given here is to initially set up the appropriate linear difference equation, where the difference is the linear travel time. When the linear travel time is replaced by a corrected nonlinear travel time, the nonlinear difference equation yields the required solution.

  15. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  16. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L 1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
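
    fastclime itself is an R package, but the CLIME estimator it implements reduces, column by column, to a small linear program. A hedged Python sketch of that reduction follows, using SciPy's generic LP solver rather than the parametric simplex method the package implements, and a made-up 2x2 covariance matrix.

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(S, j, lam):
    """One column of a CLIME-style estimate via LP:
    min ||b||_1  subject to  ||S b - e_j||_inf <= lam.
    Splitting b = u - v with u, v >= 0 gives a standard-form LP."""
    p = S.shape[0]
    e = np.zeros(p)
    e[j] = 1.0
    c = np.ones(2 * p)                    # sum(u) + sum(v) = ||b||_1
    M = np.hstack([S, -S])                # S b in terms of (u, v)
    A_ub = np.vstack([M, -M])             # S b <= e + lam and -S b <= lam - e
    b_ub = np.concatenate([e + lam, lam - e])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p))
    u, v = res.x[:p], res.x[p:]
    return u - v

S = np.array([[1.0, 0.2],
              [0.2, 1.0]])               # a toy covariance estimate
b0 = clime_column(S, 0, lam=0.05)
```

    Sweeping lam from large to small is what the parametric simplex algorithm does efficiently, producing the full piecewise-linear regularization path the abstract mentions.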

  17. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263

  18. COMPLEMENTARITY OF ECOLOGICAL GOAL FUNCTIONS

    EPA Science Inventory

    This paper summarizes, in the framework of network environ analysis, a set of analyses of energy-matter flow and storage in steady state systems. The network perspective is used to codify and unify ten ecological orientors or external principles: maximum power (Lotka), maximum st...

  19. SUBOPT: A CAD program for suboptimal linear regulators

    NASA Technical Reports Server (NTRS)

    Fleming, P. J.

    1985-01-01

    An interactive software package which provides design solutions for both standard linear quadratic regulator (LQR) and suboptimal linear regulator problems is described. Intended for time-invariant continuous systems, the package is easily modified to include sampled-data systems. LQR designs are obtained by established techniques while the large class of suboptimal problems containing controller and/or performance index options is solved using a robust gradient minimization technique. Numerical examples demonstrate features of the package and recent developments are described.
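
    A standard LQR design, the kind of computation such a package automates, can be reproduced with SciPy's Riccati solver; the double-integrator plant below is a textbook example, not one of the package's demos.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator plant: xdot = A x + B u, cost = integral of x'Qx + u'Ru.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # stabilizing Riccati solution
K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -K x
```

    For this plant the gain works out to K = [1, sqrt(3)], and the closed-loop matrix A - BK has both eigenvalues in the open left half-plane.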

  20. An Extended Microcomputer-Based Network Optimization Package.

    DTIC Science & Technology

    1982-10-01

    Analysis, Laxenburg, Austria, 1981, pp. 781-808. 9. Anton, H., Elementary Linear Algebra, John Wiley & Sons, New York, 1977. 10. Koopmans, T. C… [OCR-garbled report-form text.] Keywords: network, generalized network, microcomputer, optimization, network with gains, linear programming. A network problem, in turn, can be viewed as a specialization of a linear programming problem having at most two non-zero entries in each column.

  1. A Linear Algebraic Approach to Teaching Interpolation

    ERIC Educational Resources Information Center

    Tassa, Tamir

    2007-01-01

    A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…
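
    The linear-algebra view described above is concrete in code: choosing the monomial basis turns "find p with p(x_i) = y_i" into a Vandermonde linear system. The nodes and sample polynomial below are arbitrary.

```python
import numpy as np

# Interpolate samples of the cubic p(x) = x^3 - 2x + 1 at four nodes.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = x**3 - 2.0 * x + 1.0

# Monomial basis: columns of V are 1, x, x^2, x^3 evaluated at the nodes.
V = np.vander(x, increasing=True)
coef = np.linalg.solve(V, y)        # coefficients in the monomial basis
```

    Other interpolation forms (Lagrange, Newton) correspond to other bases for the same polynomial subspace, so they yield the same interpolant with different coordinate vectors.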

  2. SOME PROBLEMS IN THE CONSTRUCTION OF AN ELECTRON LINEAR ACCELERATOR (in Dutch)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verhaeghe, J.; Vanhuyse, V.; Van Leuven, P.

    1959-01-01

    Special problems encountered in the construction of the electron linear accelerator of the Natuurkundig Laboratorium der Rijksuniversiteit of Ghent are discussed. The subjects considered are magnetic focusing, magnetic screening of the electron gun cathode, abnormal attenuation-multipactor effects, and electron energy control. (J.S.R.)

  3. Technology, Linear Equations, and Buying a Car.

    ERIC Educational Resources Information Center

    Sandefur, James T.

    1992-01-01

    Discusses the use of technology in solving compound interest-rate problems that can be modeled by linear relationships. Uses a graphing calculator to solve the specific problem of determining the amount of money that can be borrowed to buy a car for a given monthly payment and interest rate. (MDH)
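
    The linear model behind the car-loan question is the loan-balance recurrence, which has a closed form. A short sketch follows; the payment, rate, and term are example values, not figures from the article.

```python
def affordable_loan(payment, annual_rate, months):
    """Closed form for the recurrence
    balance_{k+1} = (1 + r) * balance_k - payment, chosen so the
    balance reaches exactly zero after the last payment."""
    r = annual_rate / 12.0
    if r == 0:
        return payment * months
    return payment * (1.0 - (1.0 + r) ** (-months)) / r

# Example: $300/month for 48 months at 6% APR (illustrative numbers).
loan = affordable_loan(300.0, 0.06, 48)

# Sanity check: paying the loan down month by month should end at zero.
balance = loan
for _ in range(48):
    balance = balance * (1.0 + 0.06 / 12.0) - 300.0
```

    Tracing the balance month by month on a graphing calculator, as the article suggests, is the iterative version of the same linear recurrence.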

  4. The mean-square error optimal linear discriminant function and its application to incomplete data vectors

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1979-01-01

    In many pattern recognition problems, data vectors must be classified even though one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.

  5. Finding Optimal Gains In Linear-Quadratic Control Problems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E., Jr.

    1990-01-01

    Analytical method based on Volterra factorization leads to new approximations for optimal control gains in finite-time linear-quadratic control problem of system having infinite number of dimensions. Circumvents need to analyze and solve Riccati equations and provides more transparent connection between dynamics of system and optimal gain.

  6. A Non-linear Geodetic Data Inversion Using ABIC for Slip Distribution on a Fault With an Unknown dip Angle

    NASA Astrophysics Data System (ADS)

    Fukahata, Y.; Wright, T. J.

    2006-12-01

    We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining slip distribution is to first determine the fault geometry by minimizing the square misfit under the assumption of a uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of a uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform slip fault model, we have to simultaneously determine the values of nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity of the problem is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault. In this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to reconcile the reciprocal requirements of model resolution and estimation errors in a natural way. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal relative weight of observed data to smoothness constraints is objectively determined. In this study, using ABIC in determining the optimal dip angle, we resolved the non-linearity of the inverse problem.
We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than before.

  7. Multitrophic functional diversity predicts ecosystem functioning in experimental assemblages of estuarine consumers.

    PubMed

    Lefcheck, Jonathan S; Duffy, J Emmett

    2015-11-01

    The use of functional traits to explain how biodiversity affects ecosystem functioning has attracted intense interest, yet few studies have a priori altered functional diversity, especially in multitrophic communities. Here, we manipulated multivariate functional diversity of estuarine grazers and predators within multiple levels of species richness to test how species richness and functional diversity predicted ecosystem functioning in a multitrophic food web. Community functional diversity was a better predictor than species richness for the majority of ecosystem properties, based on generalized linear mixed-effects models. Combining inferences from eight traits into a single multivariate index increased prediction accuracy of these models relative to any individual trait. Structural equation modeling revealed that functional diversity of both grazers and predators was important in driving final biomass within trophic levels, with stronger effects observed for predators. We also show that different species drove different ecosystem responses, with evidence for both sampling effects and complementarity. Our study extends experimental investigations of functional trait diversity to a multilevel food web, and demonstrates that functional diversity can be more accurate and effective than species richness in predicting community biomass in a food web context.

  8. Convergence of a sequence of dual variables at the solution of a completely degenerate problem of linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dikin, I.

    1994-12-31

    We survey results on the convergence of the primal affine scaling method at solutions of a completely degenerate linear programming problem. Moreover, we study the case in which the next approximation lies on the boundary of the affine scaling ellipsoid. Convergence of the successive approximations to an interior point u of the solution set of the dual problem is proved. The coordinates of the vector u are determined only by the input data of the problem; they do not depend on the choice of the starting point.
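    The primal affine scaling iteration discussed above can be sketched for the simplest case of a single equality constraint, where the normal equations reduce to a scalar. This is an illustrative toy with invented data, not Dikin's original implementation.

    ```python
    # Minimal affine-scaling sketch for  min c.x  s.t.  a.x = b, x >= 0,
    # with one equality constraint, so  (A X^2 A^T) y = A X^2 c  is scalar.
    def affine_scaling(c, a, x, steps=50, gamma=0.9):
        for _ in range(steps):
            x2 = [xi * xi for xi in x]                       # diag of X^2
            y = sum(ai * x2i * ci for ai, x2i, ci in zip(a, x2, c)) \
                / sum(ai * ai * x2i for ai, x2i in zip(a, x2))
            r = [ci - ai * y for ci, ai in zip(c, a)]        # reduced costs
            d = [-x2i * ri for x2i, ri in zip(x2, r)]        # direction (in null space of a)
            limits = [xi / -di for xi, di in zip(x, d) if di < 0]
            if not limits:
                break                                        # no blocking variable
            t = gamma * min(limits)                          # stay strictly interior
            x = [xi + t * di for xi, di in zip(x, d)]
        return x

    # min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0; the optimum is (1, 0)
    sol = affine_scaling([1.0, 2.0], [1.0, 1.0], [0.5, 0.5])
    ```

    The iterates remain feasible because the direction d lies in the null space of the constraint row, and gamma < 1 keeps them strictly interior, which is the setting the boundary-step analysis above relaxes.
    
    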

  9. Interior point techniques for LP and NLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evtushenko, Y.

    By using a surjective mapping, the initial constrained optimization problem is transformed into a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After the inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm, and a generalized primal-dual interior point linear programming algorithm.

  10. Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data

    PubMed Central

    Zhao, Xin; Cheung, Leo Wang-Kit

    2007-01-01

    Background Designing appropriate machine learning methods for identifying genes that have significant discriminating power for disease outcomes has become increasingly important for our understanding of diseases at the genomic level. Although many machine learning methods have been developed and applied to microarray gene expression data analysis, the majority are based on linear models, which, however, are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear-model-based methods also tend to admit false-positive significant features more easily. Furthermore, linear-model-based algorithms often involve calculating the inverse of a matrix that is possibly singular when the number of potentially important genes is relatively large, which leads to numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods suffer from two critical problems, model selection and model parameter tuning, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches with promising potential to achieve this goal. Results A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make Bayesian inferences.
Simulation studies showed that, even without any knowledge of the underlying generative model, the KIGP performed very close to the theoretical Bayesian bound not only in the case of a linear Bayesian classifier but also in the case of a very non-linear Bayesian classifier. This sheds light on its broader usability for microarray data analysis problems, especially those for which linear methods perform poorly. The KIGP was also applied to four published microarray datasets, and the results showed that it performed better than, or at least as well as, the referenced state-of-the-art methods in all of these cases. Conclusion Mathematically built on the kernel-induced feature-space concept under a Bayesian framework, the KIGP method presented in this paper provides a unified machine learning approach for exploring both the linear and the possibly non-linear underlying relationship between the target features of a given binary disease classification problem and the related explanatory gene expression data. More importantly, it incorporates model parameter tuning into the framework. The model selection problem is addressed in the form of selecting a proper kernel type. The KIGP method also gives Bayesian probabilistic predictions for disease classification. These properties and features are beneficial to most real-world applications. The algorithm is naturally robust in numerical computation. The simulation studies and the published data studies demonstrated that the proposed KIGP performs satisfactorily and consistently. PMID:17328811

  11. A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.

    PubMed

    Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa

    2018-02-01

    Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.

  12. Direct Linearization and Adjoint Approaches to Evaluation of Atmospheric Weighting Functions and Surface Partial Derivatives: General Principles, Synergy and Areas of Application

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.

    2006-01-01

    This slide presentation reviews the observable radiances as functions of atmospheric and surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs); and the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres, all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed, one of the forward RT problem and one of the adjoint RT problem, to compute all WFs and PDs of interest. In this presentation we discuss applications of both the linearization and adjoint approaches.

  13. Application of variational and Galerkin equations to linear and nonlinear finite element analysis

    NASA Technical Reports Server (NTRS)

    Yu, Y.-Y.

    1974-01-01

    The paper discusses the application of the variational equation to nonlinear finite element analysis. The problem of beam vibration with large deflection is considered. The variational equation is shown to be flexible in both the solution of a general problem and in the finite element formulation. Difficulties are shown to arise when Galerkin's equations are used in the consideration of the finite element formulation of two-dimensional linear elasticity and of the linear classical beam.

  14. Multilevel Preconditioners for Reaction-Diffusion Problems with Discontinuous Coefficients

    DOE PAGES

    Kolev, Tzanio V.; Xu, Jinchao; Zhu, Yunrong

    2015-08-23

    In this study, we extend some of the multilevel convergence results obtained by Xu and Zhu to the case of second-order linear reaction-diffusion equations. Specifically, we consider multilevel preconditioners for solving the linear systems arising from the linear finite element approximation of the problem, where both the diffusion and reaction coefficients are piecewise-constant functions. We discuss in detail the influence of both the discontinuous reaction and diffusion coefficients on the performance of the classical BPX and multigrid V-cycle preconditioners.

  15. Linearly Adjustable International Portfolios

    NASA Astrophysics Data System (ADS)

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.

  16. Large Scale Underground Detectors in Europe

    NASA Astrophysics Data System (ADS)

    Katsanevas, S. K.

    2006-07-01

    The physics potential and complementarity of the large-scale European underground detectors, Water Cherenkov (MEMPHYS), Liquid Argon TPC (GLACIER), and Liquid Scintillator (LENA), are presented, with emphasis on the major physics opportunities, namely proton decay, supernova detection, and neutrino parameter determination using accelerator beams.

  17. Robert Frost and the Poetry of Physics.

    ERIC Educational Resources Information Center

    Coletta, W. John; Tamres, David H.

    1992-01-01

    Examines five poems by Robert Frost that illustrate Frost's interest in science. The poems include allusions to renowned physicists, metaphoric descriptions of some famous physics experiments, explorations of complementarity as enunciated by Bohr, and poetic formulations of Heisenberg's uncertainty principle. (20 references) (MDH)

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.

  19. An approximation theory for the identification of linear thermoelastic systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Su, Chien-Hua Frank

    1990-01-01

    An abstract approximation framework and convergence theory for the identification of thermoelastic systems is developed. Starting from an abstract operator formulation consisting of a coupled second order hyperbolic equation of elasticity and first order parabolic equation for heat conduction, well-posedness is established using linear semigroup theory in Hilbert space, and a class of parameter estimation problems is then defined involving mild solutions. The approximation framework is based upon generic Galerkin approximation of the mild solutions, and convergence of solutions of the resulting sequence of approximating finite dimensional parameter identification problems to a solution of the original infinite dimensional inverse problem is established using approximation results for operator semigroups. An example involving the basic equations of one dimensional linear thermoelasticity and a linear spline based scheme are discussed. Numerical results indicate how the approach might be used in a study of damping mechanisms in flexible structures.

  20. VTOL controls for shipboard landing. M.S.Thesis

    NASA Technical Reports Server (NTRS)

    Mcmuldroch, C. G.

    1979-01-01

    The problem of landing a VTOL aircraft on a small ship in rough seas using an automatic controller is examined. The controller design uses the linear quadratic Gaussian results of modern control theory. Linear time invariant dynamic models are developed for the aircraft, ship, and wave motions. A hover controller commands the aircraft to track position and orientation of the ship deck using only low levels of control power. Commands for this task are generated by the solution of the steady state linear quadratic gaussian regulator problem. Analytical performance and control requirement tradeoffs are obtained. A landing controller commands the aircraft from stationary hover along a smooth, low control effort trajectory, to a touchdown on a predicted crest of ship motion. The design problem is formulated and solved as an approximate finite-time linear quadratic stochastic regulator. Performance and control results are found by Monte Carlo simulations.
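    The steady-state linear quadratic regulator machinery underlying such a hover controller can be illustrated in its simplest scalar, discrete-time form. The plant numbers below are invented, and the Riccati equation is solved by plain fixed-point (value) iteration rather than by the methods used in the thesis.

    ```python
    # Scalar discrete-time LQR:  x_{k+1} = a x_k + b u_k,
    # cost sum_k (q x_k^2 + r u_k^2).  Value iteration on the Riccati map
    #   P <- q + a^2 P - (a b P)^2 / (r + b^2 P),
    # with feedback gain K = a b P / (r + b^2 P) and control u = -K x.
    def dlqr_gain(a, b, q, r, iters=500):
        p = q                                   # initialize cost-to-go
        for _ in range(iters):
            k = a * b * p / (r + b * b * p)     # gain for current P
            p = q + a * a * p - a * b * p * k   # Riccati update
        return k, p

    # unstable toy plant (|a| > 1), unit weights
    k, p = dlqr_gain(1.1, 1.0, 1.0, 1.0)
    closed_loop = 1.1 - 1.0 * k                 # stable iff |a - b*k| < 1
    ```

    The same fixed-point structure, in matrix form, is what the steady-state LQG regulator of the abstract relies on, with a Kalman filter supplying the state estimate.
    
    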

  1. Homotopy approach to optimal, linear quadratic, fixed architecture compensation

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1991-01-01

    Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitation of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.

  2. Global optimization algorithm for heat exchanger networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesada, I.; Grossmann, I.E.

    This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic mean driving force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective, which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator involving linear and nonlinear estimators for fractional and bilinear terms, which provide a tight lower bound to the global optimum. This NLP problem is used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method requires only a few nodes in the branch and bound search.
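    The linear estimators for bilinear terms mentioned above are in the spirit of the standard McCormick envelopes. A minimal sketch of the convex (lower) envelope of w = x*y on a box, with invented sample points:

    ```python
    # Convex (lower) McCormick envelope of the bilinear term w = x*y on the
    # box [xl, xu] x [yl, yu]: the max of the two underestimating planes.
    def mccormick_lower(x, y, xl, xu, yl, yu):
        return max(xl * y + x * yl - xl * yl,
                   xu * y + x * yu - xu * yu)

    # the envelope never exceeds the true product inside the box...
    pts = [(0.2, 0.7), (0.5, 0.5), (0.9, 0.1)]
    gaps = [x * y - mccormick_lower(x, y, 0.0, 1.0, 0.0, 1.0)
            for x, y in pts]

    # ...and is tight at the box corners
    corner = mccormick_lower(1.0, 1.0, 0.0, 1.0, 0.0, 1.0)
    ```

    Replacing each bilinear term by such planes turns the nonconvex objective into a convex underestimating NLP, which is the kind of lower-bounding problem used inside the spatial branch and bound described above.
    
    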

  3. Resolvent approach for two-dimensional scattering problems. Application to the nonstationary Schrödinger problem and the KPI equation

    NASA Astrophysics Data System (ADS)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.; Polivanov, M. C.

    1992-11-01

    The resolvent operator of the linear problem is determined as the full Green function continued in the complex domain in two variables. An analog of the known Hilbert identity is derived. We demonstrate the role of this identity in the study of two-dimensional scattering. Considering the nonstationary Schrödinger equation as an example, we show that all types of solutions of the linear problems, as well as spectral data known in the literature, are given as specific values of this unique function — the resolvent function. A new form of the inverse problem is formulated.

  4. A Factorization Approach to the Linear Regulator Quadratic Cost Problem

    NASA Technical Reports Server (NTRS)

    Milman, M. H.

    1985-01-01

    A factorization approach to the linear regulator quadratic cost problem is developed. This approach makes some new connections between optimal control, factorization, Riccati equations and certain Wiener-Hopf operator equations. Applications of the theory to systems describable by evolution equations in Hilbert space and differential delay equations in Euclidean space are presented.

  5. Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.

    ERIC Educational Resources Information Center

    Shama, Gilli; Dreyfus, Tommy

    1994-01-01

    Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…

  6. Robust Neighboring Optimal Guidance for the Advanced Launch System

    NASA Technical Reports Server (NTRS)

    Hull, David G.

    1993-01-01

    In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise linear control (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.

  7. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, open for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated with four classical sparse estimation problems: sparse linear regression, sparse logistic regression, sparse precision matrix estimation, and sparse quantile regression.
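    The one-step local linear approximation is easy to sketch in the orthonormal-design case, where the weighted-lasso subproblem reduces to coordinatewise soft-thresholding. The data below are invented; a = 3.7 is the conventional SCAD constant.

    ```python
    def scad_deriv(t, lam, a=3.7):
        """Derivative of the SCAD penalty (Fan & Li) at t >= 0."""
        if t <= lam:
            return lam
        if t < a * lam:
            return (a * lam - t) / (a - 1.0)
        return 0.0

    def soft(z, thr):
        """Soft-thresholding operator."""
        return (abs(z) - thr) * (1 if z > 0 else -1) if abs(z) > thr else 0.0

    def one_step_lla(z, lam):
        # Orthonormal design: z are the marginal least-squares coefficients,
        # and each weighted-lasso coordinate is a soft-threshold at its own
        # weight p'_lam(|beta_init_j|).
        init = [soft(zj, lam) for zj in z]                # lasso initializer
        w = [scad_deriv(abs(bj), lam) for bj in init]     # LLA weights
        return [soft(zj, wj) for zj, wj in zip(z, w)]

    beta = one_step_lla([5.0, 1.2, 0.3], lam=1.0)
    ```

    The large coefficient gets weight zero and is left unshrunk (the oracle-like, bias-free behavior the theory describes), while small coefficients keep the full lasso threshold.
    
    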

  8. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-01-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, open for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated with four classical sparse estimation problems: sparse linear regression, sparse logistic regression, sparse precision matrix estimation, and sparse quantile regression. PMID:25598560

  9. Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Bergmann, E.; Weiler, P.

    1983-01-01

    An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors, where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction-controlled spacecraft.
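    The jet-selection LP described above (firing times as nonnegative coefficients of jet acceleration vectors, minimizing fuel) can be sketched for a planar toy instance. Since an LP optimum lies at a basic solution, brute-force enumeration of 2x2 bases stands in here for the revised Simplex used in the flight software; the jet data are invented.

    ```python
    from itertools import combinations

    def jet_select(jets, rate, fuel):
        """Pick firing times t >= 0 with sum_i t_i * jets[i] == rate,
        minimizing sum_i fuel[i] * t_i, by enumerating 2x2 bases."""
        best = None
        for i, j in combinations(range(len(jets)), 2):
            (a, c), (b, d) = jets[i], jets[j]     # basis columns [a,c], [b,d]
            det = a * d - b * c
            if abs(det) < 1e-12:
                continue                          # singular basis, skip
            ti = (rate[0] * d - rate[1] * b) / det    # Cramer's rule
            tj = (rate[1] * a - rate[0] * c) / det
            if ti >= -1e-12 and tj >= -1e-12:         # feasible firing times
                cost = fuel[i] * ti + fuel[j] * tj
                if best is None or cost < best[0]:
                    best = (cost, {i: ti, j: tj})
        return best

    # three jets with planar accelerations; request rate change (1, 1)
    cost, firings = jet_select([(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)],
                               (1.0, 1.0), [1.0, 1.0, 1.5])
    ```

    Here the combined jet (1, 1) satisfies the request alone for fuel 1.5, beating the two axis jets at fuel 2.0, which is the kind of trade the real jet select makes across a much larger jet set.
    
    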

  10. Quantifying the relative irreplaceability of important bird and biodiversity areas.

    PubMed

    Di Marco, Moreno; Brooks, Thomas; Cuttelod, Annabelle; Fishpool, Lincoln D C; Rondinini, Carlo; Smith, Robert J; Bennun, Leon; Butchart, Stuart H M; Ferrier, Simon; Foppen, Ruud P B; Joppa, Lucas; Juffe-Bignoli, Diego; Knight, Andrew T; Lamoreux, John F; Langhammer, Penny F; May, Ian; Possingham, Hugh P; Visconti, Piero; Watson, James E M; Woodley, Stephen

    2016-04-01

    World governments have committed to increase the global protected areas coverage by 2020, but the effectiveness of this commitment for protecting biodiversity depends on where new protected areas are located. Threshold- and complementarity-based approaches have been independently used to identify important sites for biodiversity. We brought together these approaches by performing a complementarity-based analysis of irreplaceability in important bird and biodiversity areas (IBAs), which are sites identified using a threshold-based approach. We determined whether irreplaceability values are higher inside than outside IBAs and whether any observed difference depends on known characteristics of the IBAs. We focused on 3 regions with comprehensive IBA inventories and bird distribution atlases: Australia, southern Africa, and Europe. Irreplaceability values were significantly higher inside than outside IBAs, although differences were much smaller in Europe than elsewhere. Higher irreplaceability values in IBAs were associated with the presence and number of restricted-range species; number of criteria under which the site was identified; and mean geographic range size of the species for which the site was identified (trigger species). In addition, IBAs were characterized by higher irreplaceability values when using proportional species representation targets, rather than fixed targets. There were broadly comparable results when measuring irreplaceability for trigger species and when considering all bird species, which indicates a good surrogacy effect of the former. Recently, the International Union for Conservation of Nature has convened a consultation to consolidate global standards for the identification of key biodiversity areas (KBAs), building from existing approaches such as IBAs. 
Our results informed this consultation, and in particular a proposed irreplaceability criterion that will allow the new KBA standard to draw on the strengths of both threshold- and complementarity-based approaches. © 2015 Society for Conservation Biology.

  11. Structural and functional analyses reveal the contributions of the C- and N-lobes of Argonaute protein to selectivity of RNA target cleavage.

    PubMed

    Dayeh, Daniel M; Kruithoff, Bradley C; Nakanishi, Kotaro

    2018-04-27

    Some gene transcripts have cellular functions as regulatory noncoding RNAs. For example, ∼23-nucleotide (nt)-long siRNAs are loaded into Argonaute proteins. The resultant ribonucleoprotein assembly, the RNA-induced silencing complex (RISC), cleaves RNAs that are extensively base-paired with the loaded siRNA. To date, base complementarity is recognized as the major determinant of specific target cleavage (or slicing), but little is known about how Argonaute inspects base pairing before cleavage. A hallmark of Argonaute proteins is their bilobal structure, but despite the significance of this structure for curtailing slicing activity against mismatched targets, the molecular mechanism remains elusive. Here, our structural and functional studies of a bilobed yeast Argonaute protein and its isolated catalytic C-terminal lobe (C-lobe) revealed that the C-lobe alone retains almost all properties of bilobed Argonaute: siRNA-duplex loading, passenger cleavage/ejection, and siRNA-dependent RNA cleavage. A 2.1 Å-resolution crystal structure revealed that the catalytic C-lobe mirrors the bilobed Argonaute in terms of guide-RNA recognition and that all requirements for transitioning to the catalytically active conformation reside in the C-lobe. Nevertheless, we found that in the absence of the N-terminal lobe (N-lobe), target RNAs are scanned for complementarity only at positions 5-14 on a 23-nt guide RNA before endonucleolytic cleavage, thereby allowing for some off-target cleavage. Of note, acquisition of an N-lobe expanded the range of the guide RNA strand used for inspecting target complementarity to positions 2-23. These findings offer clues to the evolution of the bilobal structure of catalytically active Argonaute proteins. © 2018 by The American Society for Biochemistry and Molecular Biology, Inc.

  12. Highly viscous antibody solutions are a consequence of network formation caused by domain-domain electrostatic complementarities: insights from coarse-grained simulations.

    PubMed

    Buck, Patrick M; Chaudhri, Anuj; Kumar, Sandeep; Singh, Satish K

    2015-01-05

    Therapeutic monoclonal antibody (mAb) candidates that form highly viscous solutions at concentrations above 100 mg/mL can lead to challenges in bioprocessing, formulation development, and subcutaneous drug delivery. Earlier studies of mAbs with concentration-dependent high viscosity have indicated that mAbs with negatively charged Fv regions have a dipole-like quality that increases the likelihood of reversible self-association. This suggests that weak electrostatic intermolecular interactions can form transient antibody networks that participate in resistance to solution deformation under shear stress. Here this hypothesis is explored by parametrizing a coarse-grained (CG) model of an antibody using the domain charges from four different mAbs that have had their concentration-dependent viscosity behaviors previously determined. Multicopy molecular dynamics simulations were performed for these four CG mAbs at several concentrations to understand the effect of surface charge on mass diffusivity, pairwise interactions, and electrostatic network formation. Diffusion coefficients computed from simulations were in qualitative agreement with experimentally determined viscosities for all four mAbs. Contact analysis revealed an overall greater number of pairwise interactions for the two mAbs in this study with high concentration viscosity issues. Further, using equilibrated solution trajectories, the two mAbs with high concentration viscosity issues quantitatively formed more features of an electrostatic network than the other mAbs. The change in the number of these network features as a function of concentration is related to the number of pairwise interactions formed by electrostatic complementarities between antibody domains. Thus, transient antibody network formation caused by domain-domain electrostatic complementarities is the most probable origin of high concentration viscosity for mAbs in this study.

  13. Crystal structure of an antibody bound to an immunodominant peptide epitope: novel features in peptide-antibody recognition.

    PubMed

    Nair, D T; Singh, K; Sahu, N; Rao, K V; Salunke, D M

    2000-12-15

    The crystal structure of the Fab of an Ab PC283 complexed with its corresponding peptide Ag, PS1 (HQLDPAFGANSTNPD), derived from the hepatitis B virus surface Ag was determined. The PS1 stretch Gln2P to Phe7P is present in the Ag binding site of the Ab, while the next three residues of the peptide are raised above the binding groove. The residues Ser11P, Thr12P, and Asn13P then loop back onto the Ag-binding site of the Ab. The last two residues, Pro14P and Asp15P, extend outside the binding site without forming any contacts with the Ab. The PC283-PS1 complex is among the few examples where the light chain complementarity-determining regions show more interactions than the heavy chain complementarity-determining regions, and a distal framework residue is involved in Ag binding. As seen from the crystal structure, most of the contacts between peptide and Ab are through the five residues, Leu3-Asp4-Pro5-Ala6-Phe7, of PS1. The paratope is predominantly hydrophobic with aromatic residues lining the binding pocket, although a salt bridge also contributes to stabilizing the Ag-Ab interaction. The molecular surface area buried upon PS1 binding is 756 Å2 for the peptide and 625 Å2 for the Fab, which is higher than what has been seen to date for Ab-peptide complexes. A comparison between the PC283 structure and a homology model of its germline ancestor suggests that paratope optimization for PS1 occurs by improving both charge and shape complementarity.

  14. Getting the full picture: Assessing the complementarity of citizen science and agency monitoring data.

    PubMed

    Hadj-Hammou, Jeneen; Loiselle, Steven; Ophof, Daniel; Thornhill, Ian

    2017-01-01

    While the role of citizen science in engaging the public and providing large-scale datasets has been demonstrated, the nature of and potential for this science to supplement environmental monitoring efforts by government agencies has not yet been fully explored. To this end, the present study investigates the complementarity of a citizen science programme to agency monitoring of water quality. The Environment Agency (EA) is the governmental public body responsible for, among other duties, managing and monitoring water quality and water resources in England. FreshWater Watch (FWW) is a global citizen science project that supports community monitoring of freshwater quality. FWW and EA data were assessed for their spatio-temporal complementarity by comparing the geographical and seasonal coverage of nitrate (N-NO3) sampling across the River Thames catchment by the respective campaigns between spring 2013 and winter 2015. The analysis reveals that FWW citizen science-collected data complements EA data by filling in both gaps in the spatial and temporal coverage as well as gaps in waterbody type and size. In addition, partial spatio-temporal overlap in sampling efforts by the two actors is discovered, but EA sampling is found to be more consistent than FWW sampling. Statistical analyses indicate that regardless of broader geographical overlap in sampling effort, FWW sampling sites are associated with a lower stream order and water bodies of smaller surface areas than EA sampling sites. FWW also samples more still-water body sites than the EA. As a possible result of such differences in sampling tendencies, nitrate concentrations, a measure of water quality, are lower for FWW sites than EA sites. These findings strongly indicate that citizen science has clear potential to complement agency monitoring efforts by generating information on freshwater ecosystems that would otherwise be under reported.

  15. Getting the full picture: Assessing the complementarity of citizen science and agency monitoring data

    PubMed Central

    Loiselle, Steven; Ophof, Daniel; Thornhill, Ian

    2017-01-01

    While the role of citizen science in engaging the public and providing large-scale datasets has been demonstrated, the nature of and potential for this science to supplement environmental monitoring efforts by government agencies has not yet been fully explored. To this end, the present study investigates the complementarity of a citizen science programme to agency monitoring of water quality. The Environment Agency (EA) is the governmental public body responsible for, among other duties, managing and monitoring water quality and water resources in England. FreshWater Watch (FWW) is a global citizen science project that supports community monitoring of freshwater quality. FWW and EA data were assessed for their spatio-temporal complementarity by comparing the geographical and seasonal coverage of nitrate (N-NO3) sampling across the River Thames catchment by the respective campaigns between spring 2013 and winter 2015. The analysis reveals that FWW citizen science-collected data complements EA data by filling in both gaps in the spatial and temporal coverage as well as gaps in waterbody type and size. In addition, partial spatio-temporal overlap in sampling efforts by the two actors is discovered, but EA sampling is found to be more consistent than FWW sampling. Statistical analyses indicate that regardless of broader geographical overlap in sampling effort, FWW sampling sites are associated with a lower stream order and water bodies of smaller surface areas than EA sampling sites. FWW also samples more still-water body sites than the EA. As a possible result of such differences in sampling tendencies, nitrate concentrations, a measure of water quality, are lower for FWW sites than EA sites. These findings strongly indicate that citizen science has clear potential to complement agency monitoring efforts by generating information on freshwater ecosystems that would otherwise be under reported. PMID:29211752

  16. Mate choice for major histocompatibility complex complementarity in a strictly monogamous bird, the grey partridge (Perdix perdix).

    PubMed

    Rymešová, Dana; Králová, Tereza; Promerová, Marta; Bryja, Josef; Tomášek, Oldřich; Svobodová, Jana; Šmilauer, Petr; Šálek, Miroslav; Albrecht, Tomáš

    2017-01-01

    Sexual selection has been hypothesised as favouring mate choice resulting in production of viable offspring with genotypes providing high pathogen resistance. Specific pathogen recognition is mediated by genes of the major histocompatibility complex (MHC) encoding proteins fundamental for adaptive immune response in jawed vertebrates. MHC genes may also play a role in odour-based individual recognition and mate choice, aimed at avoiding inbreeding. MHC genes are known to be involved in mate choice in a number of species, with 'good genes' (absolute criteria) and 'complementary genes' (self-referential criteria) being used to explain MHC-based mating. Here, we focus on the effect of morphological traits and variation and genetic similarity between individuals in MHC class IIB (MHCIIB) exon 2 on mating in a free-living population of a monogamous bird, the grey partridge. We found no evidence for absolute mate choice criteria as regards grey partridge MHCIIB genotypes, i.e., number and occurrence of amino acid variants, though red chroma of the spot behind eyes was positively associated with male pairing success. On the other hand, mate choice at MHCIIB was based on relative criteria as females preferentially paired with more dissimilar males having a lower number of shared amino acid variants. This observation supports the 'inbreeding avoidance' and 'complementary genes' hypotheses. Our study provides one of the first pieces of evidence for MHC-based mate choice for genetic complementarity in a strictly monogamous bird. The statistical approach employed can be recommended for testing mating preferences in cases where availability of potential mates (recorded with an appropriate method such as radio-tracking) shows considerable temporal variation. Additional genetic analyses using neutral markers may detect whether MHC-based mate choice for complementarity emerges as a by-product of general inbreeding avoidance in grey partridges.

  17. NetCooperate: a network-based tool for inferring host-microbe and microbe-microbe cooperation.

    PubMed

    Levy, Roie; Carr, Rogan; Kreimer, Anat; Freilich, Shiri; Borenstein, Elhanan

    2015-05-17

    Host-microbe and microbe-microbe interactions are often governed by the complex exchange of metabolites. Such interactions play a key role in determining the way pathogenic and commensal species impact their host and in the assembly of complex microbial communities. Recently, several studies have demonstrated how such interactions are reflected in the organization of the metabolic networks of the interacting species, and introduced various graph theory-based methods to predict host-microbe and microbe-microbe interactions directly from network topology. Using these methods, such studies have revealed evolutionary and ecological processes that shape species interactions and community assembly, highlighting the potential of this reverse-ecology research paradigm. NetCooperate is a web-based tool and a software package for determining host-microbe and microbe-microbe cooperative potential. It specifically calculates two previously developed and validated metrics for species interaction: the Biosynthetic Support Score which quantifies the ability of a host species to supply the nutritional requirements of a parasitic or a commensal species, and the Metabolic Complementarity Index which quantifies the complementarity of a pair of microbial organisms' niches. NetCooperate takes as input a pair of metabolic networks, and returns the pairwise metrics as well as a list of potential syntrophic metabolic compounds. The Biosynthetic Support Score and Metabolic Complementarity Index provide insight into host-microbe and microbe-microbe metabolic interactions. NetCooperate determines these interaction indices from metabolic network topology, and can be used for small- or large-scale analyses. NetCooperate is provided as both a web-based tool and an open-source Python module; both are freely available online at http://elbo.gs.washington.edu/software_netcooperate.html.
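
    The two metrics can be illustrated with a toy set-based sketch. This is not NetCooperate's implementation: in particular, NetCooperate derives seed sets from the topology of the input metabolic networks, whereas here the compound sets are simply given, and all compound names are made up:

```python
def biosynthetic_support_score(host_network, parasite_seeds):
    """Fraction of the parasite's seed compounds available in the host network."""
    if not parasite_seeds:
        return 0.0
    return len(parasite_seeds & host_network) / len(parasite_seeds)

def metabolic_complementarity_index(a_seeds, b_network, b_seeds):
    """Fraction of A's seed compounds present in B's network but not among
    B's own seeds, i.e. compounds B could supply without competing for them."""
    if not a_seeds:
        return 0.0
    return len(a_seeds & (b_network - b_seeds)) / len(a_seeds)

# Hypothetical compound sets.
host_net = {"glc", "pyr", "atp", "nad"}
parasite_seeds = {"glc", "his"}
bss = biosynthetic_support_score(host_net, parasite_seeds)
mci = metabolic_complementarity_index({"glc", "his"}, {"glc", "his", "pyr"}, {"his"})
```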

  18. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
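
    For readers unfamiliar with the Marquardt method compared above, a minimal one-parameter sketch follows (fitting the hypothetical model y = exp(k*x); the damping parameter interpolates between Gauss-Newton and gradient-descent behaviour, which is the essence of the method, not the paper's groundwater implementation):

```python
import math

def marquardt_fit(xs, ys, k0=0.0, lam=1e-3, iters=50):
    """Minimal Marquardt iteration for the one-parameter model y = exp(k*x)."""
    def sse(k):
        return sum((y - math.exp(k * x)) ** 2 for x, y in zip(xs, ys))
    k = k0
    for _ in range(iters):
        J = [x * math.exp(k * x) for x in xs]                 # d(model)/dk
        r = [y - math.exp(k * x) for x, y in zip(xs, ys)]     # residuals
        step = sum(j * ri for j, ri in zip(J, r)) / (sum(j * j for j in J) + lam)
        if sse(k + step) < sse(k):
            k += step
            lam = max(lam / 10.0, 1e-12)   # near Gauss-Newton when fitting well
        else:
            lam *= 10.0                    # damp toward gradient descent
    return k

xs = [0.1 * i for i in range(10)]
ys = [math.exp(0.7 * x) for x in xs]   # noiseless synthetic data, true k = 0.7
k_hat = marquardt_fit(xs, ys)
```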

  19. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
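
    A full graph-cut solver is beyond a short example, but the energy being minimized, a data term through a linear sensing operator A plus a smoothness regularizer, can be shown with an exhaustive minimizer on a tiny binary problem. Brute force replaces the graph-cut machinery here and only works for small n; the blur operator and signal are made up:

```python
from itertools import product

def energy(x, A, y, lam):
    """Regularized cost ||A*x - y||^2 + lam * total variation of x."""
    data = sum((sum(a * xi for a, xi in zip(row, x)) - yi) ** 2
               for row, yi in zip(A, y))
    smooth = sum(abs(x[i] - x[i + 1]) for i in range(len(x) - 1))
    return data + lam * smooth

def brute_force_reconstruct(A, y, n, labels=(0, 1), lam=0.1):
    # Exhaustive search over all label assignments -- feasible only for tiny n;
    # graph-cut methods are what make such energies tractable at image scale.
    return min(product(labels, repeat=n), key=lambda x: energy(x, A, y, lam))

# A 1-D "blur" sensing operator: each measurement averages two neighbours.
x_true = (0, 1, 1, 1, 0, 0)
A = [[0.5 if j in (i, i + 1) else 0.0 for j in range(6)] for i in range(5)]
y = [sum(a * xi for a, xi in zip(row, x_true)) for row in A]
x_hat = brute_force_reconstruct(A, y, n=6)
```

    Note how the rows of A couple neighbouring unknowns; this variable mixing is exactly the obstacle the paper's surrogate energy functional is designed to remove.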

  20. Solving a mixture of many random linear equations by tensor decomposition and alternating minimization.

    DOT National Transportation Integrated Search

    2016-09-01

    We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...

  1. The checkpoint ordering problem

    PubMed Central

    Hungerländer, P.

    2017-01-01

    We suggest a new variant of a row layout problem: Find an ordering of n departments with given lengths such that the total weighted sum of their distances to a given checkpoint is minimized. The Checkpoint Ordering Problem (COP) is both of theoretical and practical interest. It has several applications and is conceptually related to some well-studied combinatorial optimization problems, namely the Single-Row Facility Layout Problem, the Linear Ordering Problem and a variant of parallel machine scheduling. In this paper we study the complexity of the (COP) and its special cases. The general version of the (COP) with an arbitrary but fixed number of checkpoints is NP-hard in the weak sense. We propose both a dynamic programming algorithm and an integer linear programming approach for the (COP). Our computational experiments indicate that the (COP) is hard to solve in practice. While the run time of the dynamic programming algorithm strongly depends on the length of the departments, the integer linear programming approach is able to solve instances with up to 25 departments to optimality. PMID:29170574
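
    For intuition, the single-checkpoint COP can be solved exactly by brute force for small n (the paper's dynamic program and ILP are what scale beyond that; the instance below is made up):

```python
from itertools import permutations

def cop_cost(order, lengths, weights, checkpoint):
    """Weighted sum of distances from each department's centre to the checkpoint,
    for departments laid out left to right in the given order."""
    cost, pos = 0.0, 0.0
    for d in order:
        centre = pos + lengths[d] / 2.0
        cost += weights[d] * abs(centre - checkpoint)
        pos += lengths[d]
    return cost

def solve_cop(lengths, weights, checkpoint):
    # Exhaustive search over all n! orderings -- only viable for tiny n.
    return min(permutations(range(len(lengths))),
               key=lambda o: cop_cost(o, lengths, weights, checkpoint))

lengths, weights = [3, 1, 2, 4], [5, 1, 3, 2]
best = solve_cop(lengths, weights, checkpoint=5.0)
```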

  2. Shifting the closed-loop spectrum in the optimal linear quadratic regulator problem for hereditary systems

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1985-01-01

    In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.
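
    In the finite-dimensional scalar case the alpha-shift is easy to verify directly: design the LQR gain for the shifted dynamics a + alpha, and the closed-loop pole of the original system lands to the left of -alpha. This sketch covers only that scalar finite-dimensional case, not the hereditary-system machinery of the paper, and the numbers are arbitrary:

```python
import math

def scalar_lqr_gain(a, b, q, r):
    """Positive root of the scalar continuous-time Riccati equation
    2*a*p - (b*p)**2 / r + q = 0, returned as the gain k = b*p/r."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r

# Alpha-shift: design the gain for the shifted dynamics a + alpha; the
# closed-loop pole of the ORIGINAL system then lies to the left of -alpha.
a, b, q, r, alpha = 1.0, 1.0, 1.0, 1.0, 2.0
k = scalar_lqr_gain(a + alpha, b, q, r)
closed_loop = a - b * k
```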

  3. Shifting the closed-loop spectrum in the optimal linear quadratic regulator problem for hereditary systems

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1987-01-01

    In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.

  4. Three-dimensional Finite Element Formulation and Scalable Domain Decomposition for High Fidelity Rotor Dynamic Analysis

    NASA Technical Reports Server (NTRS)

    Datta, Anubhav; Johnson, Wayne R.

    2009-01-01

    This paper has two objectives. The first objective is to formulate a 3-dimensional Finite Element Model for the dynamic analysis of helicopter rotor blades. The second objective is to implement and analyze a dual-primal iterative substructuring based Krylov solver, that is parallel and scalable, for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems - one for ideal hover (symmetric) and one for a transient forward flight (non-symmetric) - both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size - even though this conclusion is premature given the small prototype grids considered in this study.

  5. Towards lexicographic multi-objective linear programming using grossone methodology

    NASA Astrophysics Data System (ADS)

    Cococcioni, Marco; Pappalardo, Massimo; Sergeyev, Yaroslav D.

    2016-10-01

    Lexicographic Multi-Objective Linear Programming (LMOLP) problems can be solved in two ways: preemptive and nonpreemptive. The preemptive approach requires the solution of a series of LP problems, with changing constraints (each time the next objective is added, a new constraint appears). The nonpreemptive approach is based on a scalarization of the multiple objectives into a single-objective linear function by a weighted combination of the given objectives. It requires the specification of a set of weights, which is not straightforward and can be time consuming. In this work we present both mathematical and software ingredients necessary to solve LMOLP problems using a recently introduced computational methodology (allowing one to work numerically with infinities and infinitesimals) based on the concept of grossone. The ultimate goal of such an attempt is an implementation of a simplex-like algorithm, able to solve the original LMOLP problem by solving only one single-objective problem and without the need to specify finite weights. The expected advantages are therefore obvious.
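
    The lexicographic ordering at the heart of LMOLP can be mimicked in miniature with tuple comparison over a set of candidate vertices. Tuple comparison is a stand-in for the grossone-based weighting, which makes each objective infinitely more important than the next; the polytope and objectives below are made up:

```python
def lex_argmax(candidates, objectives):
    """Pick the candidate whose objective vector is lexicographically largest:
    the first objective dominates, ties are broken by the second, and so on."""
    return max(candidates, key=lambda x: tuple(f(x) for f in objectives))

# Vertices of the feasible polytope {x, y >= 0, x + y <= 4} (a stand-in for
# LP vertex enumeration), with objectives maximized in priority order.
vertices = [(0, 0), (4, 0), (0, 4), (2, 2)]
objectives = [lambda v: v[0] + v[1],   # primary: maximize x + y
              lambda v: v[1]]          # secondary: then maximize y
best = lex_argmax(vertices, objectives)
```

    The primary objective ties three vertices at value 4; the secondary objective then selects (0, 4), exactly the behaviour a preemptive LMOLP solve would produce without any finite weight specification.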

  6. Numerical Solution of Systems of Loaded Ordinary Differential Equations with Multipoint Conditions

    NASA Astrophysics Data System (ADS)

    Assanova, A. T.; Imanchiyev, A. E.; Kadirbayeva, Zh. M.

    2018-04-01

    A system of loaded ordinary differential equations with multipoint conditions is considered. The problem under study is reduced to an equivalent boundary value problem for a system of ordinary differential equations with parameters. A system of linear algebraic equations for the parameters is constructed using the matrices of the loaded terms and the multipoint condition. The conditions for the unique solvability and well-posedness of the original problem are established in terms of the matrix made up of the coefficients of the system of linear algebraic equations. The coefficients and the right-hand side of the constructed system are determined by solving Cauchy problems for linear ordinary differential equations. The solutions of the system are found in terms of the values of the desired function at the initial points of subintervals. The parametrization method is numerically implemented using the fourth-order accurate Runge-Kutta method as applied to the Cauchy problems for ordinary differential equations. The performance of the constructed numerical algorithms is illustrated by examples.
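
    The fourth-order Runge-Kutta scheme used for the auxiliary Cauchy problems is the classical one; a minimal scalar version (the test equation y' = y is illustrative, not from the paper):

```python
def rk4(f, t0, y0, t1, steps):
    """Classical fourth-order Runge-Kutta for the scalar Cauchy problem y' = f(t, y)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# y' = y, y(0) = 1 on [0, 1]; the exact solution at t = 1 is e.
approx = rk4(lambda t, y: y, 0.0, 1.0, 1.0, steps=100)
```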

  7. Linear time-varying models can reveal non-linear interactions of biomolecular regulatory networks using multiple time-series data.

    PubMed

    Kim, Jongrae; Bates, Declan G; Postlethwaite, Ian; Heslop-Harrison, Pat; Cho, Kwang-Hyun

    2008-05-15

    Inherent non-linearities in biomolecular interactions make the identification of network interactions difficult. One of the principal problems is that all methods based on the use of linear time-invariant models will have fundamental limitations in their capability to infer certain non-linear network interactions. Another difficulty is the multiplicity of possible solutions, since, for a given dataset, there may be many different possible networks which generate the same time-series expression profiles. A novel algorithm for the inference of biomolecular interaction networks from temporal expression data is presented. Linear time-varying models, which can represent a much wider class of time-series data than linear time-invariant models, are employed in the algorithm. From time-series expression profiles, the model parameters are identified by solving a non-linear optimization problem. In order to systematically reduce the set of possible solutions for the optimization problem, a filtering process is performed using a phase-portrait analysis with random numerical perturbations. The proposed approach has the advantages of not requiring the system to be in a stable steady state, of using time-series profiles which have been generated by a single experiment, and of allowing non-linear network interactions to be identified. The ability of the proposed algorithm to correctly infer network interactions is illustrated by its application to three examples: a non-linear model for cAMP oscillations in Dictyostelium discoideum, the cell-cycle data for Saccharomyces cerevisiae and a large-scale non-linear model of a group of synchronized Dictyostelium cells. The software used in this article is available from http://sbie.kaist.ac.kr/software
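
    The advantage of time-varying over time-invariant linear models can be seen in a toy identification problem: recovering a time-varying coefficient a(t) in x' = a(t)*x from sampled data by centred differences. This is a deliberate simplification of the paper's optimization-based identification, and the example system is made up:

```python
import math

def estimate_ltv_coeffs(ts, xs):
    """Recover a time-varying coefficient a(t) in x' = a(t)*x from samples,
    via centred differences: a_k ~ (x_{k+1} - x_{k-1}) / (2*h*x_k)."""
    h = ts[1] - ts[0]
    return [(xs[k + 1] - xs[k - 1]) / (2.0 * h * xs[k])
            for k in range(1, len(xs) - 1)]

# Trajectory of x' = sin(t)*x, whose exact solution is x(t) = exp(1 - cos(t));
# no single constant coefficient (an LTI model) can reproduce this profile.
h = 0.01
ts = [h * k for k in range(1001)]
xs = [math.exp(1.0 - math.cos(t)) for t in ts]
a_hat = estimate_ltv_coeffs(ts, xs)
max_err = max(abs(a - math.sin(t)) for a, t in zip(a_hat, ts[1:-1]))
```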

  8. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
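
    The core hit-and-run step (random direction, feasible chord, random point on the chord) can be sketched for linear inequality constraints, where the chord endpoints have closed form. The near-optimal region below is a made-up two-variable example; the slice-sampling step for non-linear constraints is omitted:

```python
import random

def hit_and_run(A, b, x0, n_samples, seed=0):
    """Hit-and-run over the polytope {x : A x <= b}, starting from an interior
    point. Each iterate draws a random direction, computes the feasible chord
    through the current point, and jumps to a uniform point on that chord."""
    rng = random.Random(seed)
    x, dim, samples = list(x0), len(x0), []
    for _ in range(n_samples):
        d = [rng.gauss(0, 1) for _ in range(dim)]
        lo, hi = float("-inf"), float("inf")
        for row, bi in zip(A, b):
            ad = sum(r * di for r, di in zip(row, d))      # constraint vs direction
            slack = bi - sum(r * xi for r, xi in zip(row, x))
            if abs(ad) < 1e-12:
                continue                                    # chord parallel to face
            t = slack / ad
            if ad > 0:
                hi = min(hi, t)
            else:
                lo = max(lo, t)
        t = rng.uniform(lo, hi)
        x = [xi + t * di for xi, di in zip(x, d)]
        samples.append(tuple(x))
    return samples

# Near-optimal region of: max x + y s.t. x, y in [0, 1]. Optimum is 2; a 25%
# tolerance is encoded as the extra MGA-style constraint -(x + y) <= -1.5.
A = [[1, 0], [-1, 0], [0, 1], [0, -1], [-1, -1]]
b = [1, 0, 1, 0, -1.5]
samples = hit_and_run(A, b, [0.9, 0.9], 200)
```

    Note that the tolerance constraint is handled identically to the original problem constraints, which is what lets the sampler move anywhere in the near-optimal region in a single step.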

  9. [Power, interdependence and complementarity in hospital work: an analysis from the nursing point of view].

    PubMed

    Lopes, M J

    1997-01-01

    This essay discusses recent transformations in hospital work generally and in nursing work specifically. The analysis focuses on the inter- and intra-team relations of the multidisciplinary teams constituted by the practices of the therapeutic process present in hospital space-time.

  10. Analysis of Relational Communication in Dyads: New Measurement Procedures.

    ERIC Educational Resources Information Center

    Rogers, L. Edna; Farace, Richard

    Relational communication refers to the control or dominance aspects of message exchange in dyads--distinguishing it from the report or referential aspects of communication. In relational communicational analysis, messages as transactions are emphasized; major theoretical concepts which emerge are symmetry, transitoriness, and complementarity of…

  11. Educating for Safety.

    ERIC Educational Resources Information Center

    Rothe, J. Peter

    1991-01-01

    To enhance the chance for success in educating young drivers, there should be a balance between the content, structure, and goals of traffic safety programs and the normative rules governing young people's lives. Presents recommendations for safety education based on the notion of complementarity and using a multiperspective approach. (AF)

  12. Cognitive-Developmental and Behavior-Analytic Theories: Evolving into Complementarity

    ERIC Educational Resources Information Center

    Overton, Willis F.; Ennis, Michelle D.

    2006-01-01

    Historically, cognitive-developmental and behavior-analytic approaches to the study of human behavior change and development have been presented as incompatible alternative theoretical and methodological perspectives. This presumed incompatibility has been understood as arising from divergent sets of metatheoretical assumptions that take the form…

  13. The Space Infrared Interferometric Telescope (SPIRIT) and its Complementarity to ALMA

    NASA Technical Reports Server (NTRS)

    Leisawitz, Dave

    2007-01-01

    We report results of a pre-Formulation Phase study of SPIRIT, a candidate NASA Origins Probe mission. SPIRIT is a spatial and spectral interferometer with an operating wavelength range 25 - 400 microns. SPIRIT will provide sub-arcsecond resolution images and spectra with resolution R = 3000 in a 1 arcmin field of view to accomplish three primary scientific objectives: (1) Learn how planetary systems form from protostellar disks, and how they acquire their chemical organization; (2) Characterize the family of extrasolar planetary systems by imaging the structure in debris disks to understand how and where planets of different types form; and (3) Learn how high-redshift galaxies formed and merged to form the present-day population of galaxies. In each of these science domains, SPIRIT will yield information complementary to that obtainable with the James Webb Space Telescope (JWST) and the Atacama Large Millimeter Array (ALMA), and all three observatories could operate contemporaneously. Here we shall emphasize the SPIRIT science goals (1) and (2) and the mission's complementarity with ALMA.

  14. Stability of neutrino parameters and self-complementarity relation with varying SUSY breaking scale

    NASA Astrophysics Data System (ADS)

    Singh, K. Sashikanta; Roy, Subhankar; Singh, N. Nimai

    2018-03-01

    The scale at which supersymmetry (SUSY) breaks (ms) is still unknown. The present article, following a top-down approach, endeavors to study the effect of varying ms on the radiative stability of the observational parameters associated with the neutrino mixing. These parameters get additional contributions in the minimal supersymmetric model (MSSM). A variation in ms will influence the bounds for which the Standard Model (SM) and MSSM work and hence, will account for the different radiative contributions received from both sectors, respectively, while running the renormalization group equations (RGE). The present work establishes the invariance of the self-complementarity relation among the three mixing angles, θ13+θ12≈θ23, against radiative evolution. A similar result concerning the mass ratio, m2:m1, is also found to be valid. In addition to varying ms, the work incorporates a range of different seesaw (SS) scales and tries to see how the latter affects the parameters.

  15. Evaluating multiple determinants of the structure of plant-animal mutualistic networks.

    PubMed

    Vázquez, Diego P; Chacoff, Natacha P; Cagnolo, Luciano

    2009-08-01

    The structure of mutualistic networks is likely to result from the simultaneous influence of neutrality and the constraints imposed by complementarity in species phenotypes, phenologies, spatial distributions, phylogenetic relationships, and sampling artifacts. We develop a conceptual and methodological framework to evaluate the relative contributions of these potential determinants. Applying this approach to the analysis of a plant-pollinator network, we show that information on relative abundance and phenology suffices to predict several aggregate network properties (connectance, nestedness, interaction evenness, and interaction asymmetry). However, such information falls short of predicting the detailed network structure (the frequency of pairwise interactions), leaving a large amount of variation unexplained. Taken together, our results suggest that both relative species abundance and complementarity in spatiotemporal distribution contribute substantially to generate observed network patterns, but that this information is by no means sufficient to predict the occurrence and frequency of pairwise interactions. Future studies could use our methodological framework to evaluate the generality of our findings in a representative sample of study systems with contrasting ecological conditions.

  16. Identification of a sequence element on the 3' side of AAUAAA which is necessary for simian virus 40 late mRNA 3'-end processing.

    PubMed Central

    Sadofsky, M; Connelly, S; Manley, J L; Alwine, J C

    1985-01-01

    Our previous studies of the 3'-end processing of simian virus 40 late mRNAs indicated the existence of an essential element (or elements) downstream of the AAUAAA signal. We report here the use of transient expression analysis to study a functional element which we located within the sequence AGGUUUUUU, beginning 59 nucleotides downstream of the recognized signal AAUAAA. Deletion of this element resulted in (i) at least a 75% drop in 3'-end processing at the normal site and (ii) appearance of readthrough transcripts with alternate 3' ends. Some flexibility in the downstream position of this element relative to the AAUAAA was noted by deletion analysis. Using computer sequence comparison, we located homologous regions within downstream sequences of other genes, suggesting a generalized sequence element. In addition, specific complementarity is noted between the downstream element and U4 RNA. The possibility that this complementarity could participate in 3'-end site selection is discussed. PMID:3016512

  17. Humanization of Antibodies Using Heavy Chain Complementarity-determining Region 3 Grafting Coupled with in Vitro Somatic Hypermutation*

    PubMed Central

    Bowers, Peter M.; Neben, Tamlyn Y.; Tomlinson, Geoffery L.; Dalton, Jennifer L.; Altobell, Larry; Zhang, Xue; Macomber, John L.; Wu, Betty F.; Toobian, Rachelle M.; McConnell, Audrey D.; Verdino, Petra; Chau, Betty; Horlick, Robert A.; King, David J.

    2013-01-01

    A method for simultaneous humanization and affinity maturation of monoclonal antibodies has been developed using heavy chain complementarity-determining region (CDR) 3 grafting combined with somatic hypermutation in vitro. To minimize the amount of murine antibody-derived antibody sequence used during humanization, only the CDR3 region from a murine antibody that recognizes the cytokine hβNGF was grafted into a nonhomologous human germ line V region. The resulting CDR3-grafted HC was paired with a CDR-grafted light chain, displayed on the surface of HEK293 cells, and matured using in vitro somatic hypermutation. A high affinity humanized antibody was derived that was considerably more potent than the parental antibody, possessed a low pM dissociation constant, and demonstrated potent inhibition of hβNGF activity in vitro. The resulting antibody contained half the heavy chain murine donor sequence compared with the same antibody humanized using traditional methods. PMID:23355464

  18. Quantum subsystems: Exploring the complementarity of quantum privacy and error correction

    NASA Astrophysics Data System (ADS)

    Jochym-O'Connor, Tomas; Kribs, David W.; Laflamme, Raymond; Plosker, Sarah

    2014-09-01

    This paper addresses and expands on the contents of the recent Letter [Phys. Rev. Lett. 111, 030502 (2013), 10.1103/PhysRevLett.111.030502] discussing private quantum subsystems. Here we prove several previously presented results, including a condition for a given random unitary channel to not have a private subspace (although this does not mean that private communication cannot occur, as was previously demonstrated via private subsystems) and algebraic conditions that characterize when a general quantum subsystem or subspace code is private for a quantum channel. These conditions can be regarded as the private analog of the Knill-Laflamme conditions for quantum error correction, and we explore how the conditions simplify in some special cases. The bridge between quantum cryptography and quantum error correction provided by complementary quantum channels motivates the study of a new, more general definition of quantum error-correcting code, and we initiate this study here. We also consider the concept of complementarity for the general notion of a private quantum subsystem.

  19. ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (CDC VERSION)

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1994-01-01

    This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. 
The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Kleinman, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model.
In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1989. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.
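
    ORACLS itself is a FORTRAN subroutine library, but the steady-state regulator computation the abstract describes (a Newton iteration on the algebraic Riccati equation, with a Lyapunov solve at each step) can be sketched in a few lines of modern numerical code. The sketch below is our own illustration, not ORACLS code; the double-integrator plant and the initial stabilizing gain are made-up example data.

```python
import numpy as np

def lyap(Acl, Qrhs):
    """Solve Acl^T X + X Acl + Qrhs = 0 by Kronecker vectorization."""
    n = Acl.shape[0]
    I = np.eye(n)
    M = np.kron(I, Acl.T) + np.kron(Acl.T, I)
    x = np.linalg.solve(M, -Qrhs.reshape(-1, order="F"))
    return x.reshape((n, n), order="F")

# Hypothetical double-integrator plant x' = A x + B u with quadratic weights.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)           # state weighting
R = np.array([[1.0]])   # control weighting
Rinv = np.linalg.inv(R)

# Newton (Kleinman-type) iteration for A^T P + P A - P B R^-1 B^T P + Q = 0,
# starting from an assumed-known stabilizing gain.
K = np.array([[1.0, 2.0]])
for _ in range(25):
    Acl = A - B @ K
    P = lyap(Acl, Q + K.T @ R @ K)   # policy evaluation: Lyapunov solve
    K = Rinv @ B.T @ P               # policy improvement: Newton update

print(np.round(P, 4))  # converges to [[sqrt(3), 1], [1, sqrt(3)]]
```

    The constant feedback law u = -K x with K = R^-1 B^T P is exactly the kind of constant-gain control law the infinite-duration LQG formulation above yields.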

  20. ORACLS- OPTIMAL REGULATOR ALGORITHMS FOR THE CONTROL OF LINEAR SYSTEMS (DEC VAX VERSION)

    NASA Technical Reports Server (NTRS)

    Frisch, H.

    1994-01-01

    This control theory design package, called Optimal Regulator Algorithms for the Control of Linear Systems (ORACLS), was developed to aid in the design of controllers and optimal filters for systems which can be modeled by linear, time-invariant differential and difference equations. Optimal linear quadratic regulator theory, currently referred to as the Linear-Quadratic-Gaussian (LQG) problem, has become the most widely accepted method of determining optimal control policy. Within this theory, the infinite duration time-invariant problems, which lead to constant gain feedback control laws and constant Kalman-Bucy filter gains for reconstruction of the system state, exhibit high tractability and potential ease of implementation. A variety of new and efficient methods in the field of numerical linear algebra have been combined into the ORACLS program, which provides for the solution to time-invariant continuous or discrete LQG problems. The ORACLS package is particularly attractive to the control system designer because it provides a rigorous tool for dealing with multi-input and multi-output dynamic systems in both continuous and discrete form. The ORACLS programming system is a collection of subroutines which can be used to formulate, manipulate, and solve various LQG design problems. The ORACLS program is constructed in a manner which permits the user to maintain considerable flexibility at each operational state. This flexibility is accomplished by providing primary operations, analysis of linear time-invariant systems, and control synthesis based on LQG methodology. The input-output routines handle the reading and writing of numerical matrices, printing heading information, and accumulating output information. The basic vector-matrix operations include addition, subtraction, multiplication, equation, norm construction, tracing, transposition, scaling, juxtaposition, and construction of null and identity matrices. 
The analysis routines provide for the following computations: the eigenvalues and eigenvectors of real matrices; the relative stability of a given matrix; matrix factorization; the solution of linear constant coefficient vector-matrix algebraic equations; the controllability properties of a linear time-invariant system; the steady-state covariance matrix of an open-loop stable system forced by white noise; and the transient response of continuous linear time-invariant systems. The control law design routines of ORACLS implement some of the more common techniques of time-invariant LQG methodology. For the finite-duration optimal linear regulator problem with noise-free measurements, continuous dynamics, and integral performance index, a routine is provided which implements the negative exponential method for finding both the transient and steady-state solutions to the matrix Riccati equation. For the discrete version of this problem, the method of backwards differencing is applied to find the solutions to the discrete Riccati equation. A routine is also included to solve the steady-state Riccati equation by the Newton algorithms described by Kleinman, for continuous problems, and by Hewer, for discrete problems. Another routine calculates the prefilter gain to eliminate control state cross-product terms in the quadratic performance index and the weighting matrices for the sampled data optimal linear regulator problem. For cases with measurement noise, duality theory and optimal regulator algorithms are used to calculate solutions to the continuous and discrete Kalman-Bucy filter problems. Finally, routines are included to implement the continuous and discrete forms of the explicit (model-in-the-system) and implicit (model-in-the-performance-index) model following theory. These routines generate linear control laws which cause the output of a dynamic time-invariant system to track the output of a prescribed model.
In order to apply ORACLS, the user must write an executive (driver) program which inputs the problem coefficients, formulates and selects the routines to be used to solve the problem, and specifies the desired output. There are three versions of ORACLS source code available for implementation: CDC, IBM, and DEC. The CDC version has been implemented on a CDC 6000 series computer with a central memory of approximately 13K (octal) of 60 bit words. The CDC version is written in FORTRAN IV, was developed in 1978, and last updated in 1986. The IBM version has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The IBM version is written in FORTRAN IV and was generated in 1981. The DEC version has been implemented on a VAX series computer operating under VMS. The VAX version is written in FORTRAN 77 and was generated in 1986.

  1. High-profile students’ growth of mathematical understanding in solving linear programming problems

    NASA Astrophysics Data System (ADS)

    Utomo; Kusmayadi, TA; Pramudya, I.

    2018-04-01

    Linear programming has an important role in human life. It is taught at senior high school and college levels and is applied in economics, transportation, the military, and other fields, so mastering linear programming is a useful provision for life. This research describes the growth of mathematical understanding in solving linear programming problems, framed by the Pirie-Kieren model of the growth of understanding, and therefore used a qualitative approach. The subjects were two high-profile grade XI students in Salatiga city, chosen mainly on the basis of the growth of understanding shown in a classroom test; their marks on the prerequisite material were ≥ 75. Both subjects were interviewed to probe their growth of mathematical understanding in solving linear programming problems. The findings show that the subjects often folded back to the primitive knowing level before moving forward to the next level, because their primitive understanding was not comprehensive.
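
    The linear programs such students meet are small enough to solve by checking the corner points of the feasible region, the same reasoning used in the graphical classroom method. A minimal sketch with a made-up two-variable example (our illustration, not from the study):

```python
import numpy as np
from itertools import combinations

# Example LP (hypothetical): maximize 3x + 2y
# subject to x + y <= 4, x <= 2, x >= 0, y >= 0, written as A v <= b.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 2.0, 0.0, 0.0])

best, best_v = -np.inf, None
for i, j in combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                          # parallel constraints: no vertex
    v = np.linalg.solve(M, b[[i, j]])     # intersection of two constraints
    if np.all(A @ v <= b + 1e-9) and c @ v > best:
        best, best_v = c @ v, v           # keep the best feasible vertex

print(best_v, best)  # optimum at (2, 2) with value 10
```

    Because a linear objective attains its maximum at a vertex of the feasible polygon, enumerating vertices is sufficient at this scale; production solvers use the simplex or interior-point methods instead.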

  2. Linearized-moment analysis of the temperature jump and temperature defect in the Knudsen layer of a rarefied gas.

    PubMed

    Gu, Xiao-Jun; Emerson, David R

    2014-06-01

    Understanding the thermal behavior of a rarefied gas remains a fundamental problem. In the present study, we investigate the predictive capabilities of the regularized 13 and 26 moment equations. In this paper, we consider low-speed problems with small gradients, and to simplify the analysis, a linearized set of moment equations is derived to explore a classic temperature problem. Analytical solutions obtained for the linearized 26 moment equations are compared with available kinetic models and can reliably capture all qualitative trends for the temperature-jump coefficient and the associated temperature defect in the thermal Knudsen layer. In contrast, the linearized 13 moment equations lack the necessary physics to capture these effects and consistently underpredict kinetic theory. The deviation from kinetic theory for the 13 moment equations increases significantly for specular reflection of gas molecules, whereas the 26 moment equations compare well with results from kinetic theory. To improve engineering analyses, expressions for the effective thermal conductivity and Prandtl number in the Knudsen layer are derived with the linearized 26 moment equations.

  3. Optimal control problem for linear fractional-order systems, described by equations with Hadamard-type derivative

    NASA Astrophysics Data System (ADS)

    Postnov, Sergey

    2017-11-01

    Two kinds of optimal control problem are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm, and the problem of control with minimal time under a given restriction on the control norm. The problems are posed with nonlocal initial conditions, and admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problems are studied by the moment method, and correctness and solvability conditions for the corresponding moment problem are derived. For several special cases the stated optimal control problems are solved analytically. Some analogies are drawn between the results obtained and known results for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.

  4. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.

    1987-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.
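
    The additivity the report describes is easy to demonstrate numerically: for any linear system A x = b (for example, a discretized ground-water-flow equation), the responses to individual stresses add to give the composite response. A minimal sketch with a hypothetical operator (our example, not from the report):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) + 5 * np.eye(5)  # hypothetical nonsingular linear operator
b1 = rng.normal(size=5)                       # stress 1 (e.g., pumping well 1)
b2 = rng.normal(size=5)                       # stress 2 (e.g., pumping well 2)

x1 = np.linalg.solve(A, b1)        # response to stress 1 alone
x2 = np.linalg.solve(A, b2)        # response to stress 2 alone
x12 = np.linalg.solve(A, b1 + b2)  # response to both stresses together

print(np.allclose(x12, x1 + x2))   # superposition: composite = sum of parts
```

    This is exactly why drawdowns from separate wells in a confined (linear) aquifer may be computed independently and summed; the principle fails for nonlinear systems such as unconfined aquifers with large water-table changes.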

  5. Guaranteed estimation of solutions to Helmholtz transmission problems with uncertain data from their indirect noisy observations

    NASA Astrophysics Data System (ADS)

    Podlipenko, Yu. K.; Shestopalov, Yu. V.

    2017-09-01

    We investigate the guaranteed estimation problem for linear functionals of solutions to transmission problems for the Helmholtz equation with inexact data. The right-hand sides of the equations entering the statements of the transmission problems, and the statistical characteristics of the observation errors, are assumed unknown but belonging to certain sets. It is shown that the optimal linear mean-square estimates of the above-mentioned functionals, and the estimation errors, are expressed via solutions to systems of transmission problems of a special type. The results and techniques can be applied in the analysis and estimation of solutions to forward and inverse electromagnetic and acoustic problems with uncertain data that arise in mathematical models of wave diffraction on transparent bodies.

  6. Fleet Assignment Using Collective Intelligence

    NASA Technical Reports Server (NTRS)

    Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.

    2004-01-01

    Product distribution theory is a new collective intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a non-linear objective cold start fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (130 variables, 184 constraints).

  7. Cooperative global optimal preview tracking control of linear multi-agent systems: an internal model approach

    NASA Astrophysics Data System (ADS)

    Lu, Yanrong; Liao, Fucheng; Deng, Jiamei; Liu, Huiyang

    2017-09-01

    This paper investigates the cooperative global optimal preview tracking problem of linear multi-agent systems under the assumption that the output of a leader is a previewable periodic signal and the topology graph contains a directed spanning tree. First, a type of distributed internal model is introduced, and the cooperative preview tracking problem is converted to a global optimal regulation problem of an augmented system. Second, an optimal controller, which can guarantee the asymptotic stability of the augmented system, is obtained by means of the standard linear quadratic optimal preview control theory. Third, on the basis of proving the existence conditions of the controller, sufficient conditions are given for the original problem to be solvable, meanwhile a cooperative global optimal controller with error integral and preview compensation is derived. Finally, the validity of theoretical results is demonstrated by a numerical simulation.

  8. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1987-01-01

    Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both when the cost integral ranges over a finite time interval and when it ranges over an infinite one. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.

  9. The linear regulator problem for parabolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Kunisch, K.

    1983-01-01

    An approximation framework is presented for computation (in finite dimensional spaces) of Riccati operators that can be guaranteed to converge to the Riccati operator in feedback controls for abstract evolution systems in a Hilbert space. It is shown how these results may be used in the linear optimal regulator problem for a large class of parabolic systems.

  10. Investigating High-School Students' Reasoning Strategies when They Solve Linear Equations

    ERIC Educational Resources Information Center

    Huntley, Mary Ann; Marcus, Robin; Kahan, Jeremy; Miller, Jane Lincoln

    2007-01-01

    A cross-curricular structured-probe task-based clinical interview study with 44 pairs of third-year high-school mathematics students, most of whom were high achieving, was conducted to investigate their approaches to a variety of algebra problems. This paper presents results from one problem that involved solving a set of three linear equations of…

  11. Enriched Imperialist Competitive Algorithm for system identification of magneto-rheological dampers

    NASA Astrophysics Data System (ADS)

    Talatahari, Siamak; Rahbari, Nima Mohajer

    2015-10-01

    In the current research, the imperialist competitive algorithm is substantially enhanced, and a new optimization method, dubbed the Enriched Imperialist Competitive Algorithm (EICA), is introduced to deal with highly non-linear optimization problems. To examine its functionality and efficacy closely, the proposed metaheuristic optimization approach is employed to solve the parameter identification of two different types of hysteretic Bouc-Wen models simulating the non-linear behavior of MR dampers. Two types of experimental data are used in the optimization problems to examine the robustness of the proposed EICA. The obtained results demonstrate the high adaptability of EICA to such non-linear and hysteretic problems.

  12. Hybrid Genetic Agorithms and Line Search Method for Industrial Production Planning with Non-Linear Fitness Function

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian; Barsoum, Nader

    2008-10-01

    Many engineering, science, information technology and management optimization problems can be considered as non-linear programming real-world problems in which all or some of the parameters and variables involved are uncertain in nature. These can only be quantified using intelligent computational techniques such as evolutionary computation and fuzzy logic. The main objective of this research paper is to solve a non-linear fuzzy optimization problem in which the technological coefficients in the constraints are fuzzy numbers represented by logistic membership functions, using a hybrid evolutionary optimization approach. To explore the applicability of the present study, a numerical example is considered to determine the production planning for the decision variables and the profit of the company.

  13. An Obstruction to the Integrability of a Class of Non-linear Wave Equations by 1-Stable Cartan Characteristics

    NASA Astrophysics Data System (ADS)

    Fackerell, E. D.; Hartley, D.; Tucker, R. W.

    We examine in detail the Cauchy problem for a class of non-linear hyperbolic equations in two independent variables. This class is motivated by the analysis of the dynamics of a line of non-linearly coupled particles by Fermi, Pasta, and Ulam and extends the recent investigation of this problem by Gardner and Kamran. We find conditions for the existence of a 1-stable Cartan characteristic of a Pfaffian exterior differential system whose integral curves provide a solution to the Cauchy problem. The same obstruction to involution is exposed in Darboux's method of integration and the two approaches are compared. A class of particular solutions to the obstruction is constructed.

  14. Application of a stochastic inverse to the geophysical inverse problem

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.; Minster, J. B.

    1972-01-01

    The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
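
    The contrast between the Moore-Penrose generalized inverse and a damped, stochastic-inverse-style estimate can be sketched numerically. The following is our own illustration (the matrix, data, and damping parameter alpha are made up; alpha stands in for the noise-based regularization in the paper): the generalized inverse gives the minimum-norm solution that fits the data exactly, while damping shrinks the model estimate, trading resolution against noise amplification.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(3, 6))     # 3 data, 6 model parameters: underdetermined G m = d
m_true = rng.normal(size=6)
d = G @ m_true

# Moore-Penrose solution: minimum-norm model fitting the data exactly.
m_pinv = np.linalg.pinv(G) @ d

# Damped (stochastic-inverse-like) estimate: G^T (G G^T + alpha I)^-1 d.
alpha = 0.1                      # hypothetical damping (noise/signal tradeoff)
m_damp = G.T @ np.linalg.solve(G @ G.T + alpha * np.eye(3), d)

print(np.allclose(G @ m_pinv, d))                              # exact data fit
print(np.linalg.norm(m_damp) <= np.linalg.norm(m_pinv))        # damping shrinks
```

    Varying alpha traces out a tradeoff curve of the Backus-Gilbert type: alpha → 0 recovers the generalized inverse, while larger alpha suppresses noise at the cost of resolution.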

  15. On mathematical modelling of aeroelastic problems with finite element method

    NASA Astrophysics Data System (ADS)

    Sváček, Petr

    2018-06-01

    This paper addresses the solution of two-dimensional aeroelastic problems, comparing two mathematical models on a benchmark problem. First, the classical approach of linearized aerodynamical forces is described to determine the aeroelastic instability and the aeroelastic response in terms of frequency and damping coefficient. This approach is compared to a coupled fluid-structure model solved with the aid of the finite element method used for approximation of the incompressible Navier-Stokes equations. The finite element approximations are coupled to the non-linear equations of motion of a flexibly supported airfoil. The two methods are first compared for the case of small displacements, where the linearized approach can be well adopted. The influence of nonlinearities in the post-critical regime is then discussed.

  16. An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems

    NASA Astrophysics Data System (ADS)

    Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri

    2018-01-01

    The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. A strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbouring integer points. Successful implementation of these algorithms was achieved on various test problems.

  17. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution defined through fuzzy assertions by ambiguous experts. The problem formulation is presented, and two solution strategies are given: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen’s method is employed to find a compromise solution, supported by an illustrative numerical example.

  18. On the Problems of Construction and Statistical Inference Associated with a Generalization of Canonical Variables.

    DTIC Science & Technology

    1982-02-01

    of them are presented in this paper. As an application, important practical problems similar to the one posed by Gnanadesikan (1977), p. 77 can be... Gnanadesikan and Wilk (1969) to search for a non-linear combination, giving rise to a non-linear first principal component. So, a p-dimensional vector can...distribution, Gnanadesikan and Gupta (1970) and earlier Eaton (1967) have considered the problem of ranking the r underlying populations according to the

  19. Proceedings of the Tenth Annual National Conference on Ada Technology. Held in Arlington, VA, on February 24-28, 1992

    DTIC Science & Technology

    1992-02-01

    Newsletter, Vol. 5, No. 1, January 1983 be translated from HAL/S. 4. Klumpp, Allan R., An Ada Linear Algebra Software development costs for using the...a linear algebra approach to As noted above, the concept of the problem and address the problem of unit dimensional analysis extends beyond problems...you will join us again next year. The 11th Annual Conference on Ada Technology (1993) will be held here at the Hyatt Regency - Crystal City

  20. Graph cuts via l1 norm minimization.

    PubMed

    Bhusnurmath, Arvind; Taylor, Camillo J

    2008-10-01

    Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
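
    The paper's final reduction is a sequence of sparse linear solves with the graph Laplacian. As a toy illustration of one such solve (our own sketch of a harmonic-interpolation labeling on a path graph, not the authors' algorithm), pin the labels of two "seed" vertices and solve the Laplacian system on the interior:

```python
import numpy as np

# Path graph 0-1-2-3-4: build the combinatorial Laplacian L = D - W.
n = 5
edges = [(i, i + 1) for i in range(n - 1)]
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

# Pin x0 = 0 and x4 = 1 (label seeds), then solve L x = 0 on the free interior
# vertices: the labeling is harmonic between the seeds.
fixed = {0: 0.0, 4: 1.0}
free = [i for i in range(n) if i not in fixed]
b = -sum(L[np.ix_(free, [k])] * v for k, v in fixed.items())
x_free = np.linalg.solve(L[np.ix_(free, free)], b.ravel())

print(np.round(x_free, 3))  # harmonic interpolation: [0.25, 0.5, 0.75]
```

    On image-derived graphs the same system is large but sparse, which is what makes interior point methods with Laplacian solves, as described above, practical.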

  1. Constructing Identities in Multicultural Learning Contexts

    ERIC Educational Resources Information Center

    Crafter, Sarah; de Abreu, Guida

    2010-01-01

    In this article we examine two concepts that aid our understanding of processes of identification in multiethnic schools. The first concept focuses on the complementarity of "three processes of identity" (identifying the other, being identified, and self-identification). This is brought together with the concept of sociocultural coupling…

  2. Formative Assessment Probes: Pendulums and Crooked Swings--Connecting Science and Engineering

    ERIC Educational Resources Information Center

    Keeley, Page

    2013-01-01

    The "Next Generation Science Standards" provide opportunities for students to experience the link between science and engineering. In the December 2011 issue of "Science and Children," Rodger Bybee explains: "The relationship between science and engineering practices is one of complementarity. Given the inclusion of…

  3. Complementarity of statistical treatments to reconstruct worldwide routes of invasion: The case of the Asian ladybird Harmonia axyridis

    USDA-ARS?s Scientific Manuscript database

    Technical Abstract. Molecular markers can provide clear insight into the introduction history of invasive species. However, inferences about recent introduction histories remain challenging, because of the stochastic demographic processes often involved. Approximate Bayesian computation (ABC) can he...

  4. The Riesz-Radon-Fréchet problem of characterization of integrals

    NASA Astrophysics Data System (ADS)

    Zakharov, Valerii K.; Mikhalev, Aleksandr V.; Rodionov, Timofey V.

    2010-11-01

    This paper is a survey of results on characterizing integrals as linear functionals. It starts from the familiar result of F. Riesz (1909) on integral representation of bounded linear functionals by Riemann-Stieltjes integrals on a closed interval, and is directly connected with Radon's famous theorem (1913) on integral representation of bounded linear functionals by Lebesgue integrals on a compact subset of {R}^n. After the works of Radon, Fréchet, and Hausdorff, the problem of characterizing integrals as linear functionals took the particular form of the problem of extending Radon's theorem from {R}^n to more general topological spaces with Radon measures. This problem turned out to be difficult, and its solution has a long and rich history. Therefore, it is natural to call it the Riesz-Radon-Fréchet problem of characterization of integrals. Important stages of its solution are associated with such eminent mathematicians as Banach (1937-1938), Saks (1937-1938), Kakutani (1941), Halmos (1950), Hewitt (1952), Edwards (1953), Prokhorov (1956), Bourbaki (1969), and others. Essential ideas and technical tools were developed by A.D. Alexandrov (1940-1943), Stone (1948-1949), Fremlin (1974), and others. Most of this paper is devoted to the contemporary stage of the solution of the problem, connected with papers of König (1995-2008), Zakharov and Mikhalev (1997-2009), and others. The general solution of the problem is presented in the form of a parametric theorem on characterization of integrals which directly implies the characterization theorems of the indicated authors. Bibliography: 60 titles.

  5. An analysis of hypercritical states in elastic and inelastic systems

    NASA Astrophysics Data System (ADS)

    Kowalczk, Maciej

    The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author proceeds to discuss the analytical basics of continuation methods and analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. The author provides a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations. The author supplements his theoretical solutions with numerical solutions of non-linear problems for rod systems and problems of the plastic disintegration of a notched rectangular plastic plate.

  6. Stability of Linear Equations--Algebraic Approach

    ERIC Educational Resources Information Center

    Cherif, Chokri; Goldstein, Avraham; Prado, Lucio M. G.

    2012-01-01

    This article could be of interest to teachers of applied mathematics as well as to people who are interested in applications of linear algebra. We give a comprehensive study of linear systems from an application point of view. Specifically, we give an overview of linear systems and problems that can occur with the computed solution when the…

  7. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    ERIC Educational Resources Information Center

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data analytic results from three regression…

  8. A feedback linearization approach to spacecraft control using momentum exchange devices. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Dzielski, John Edward

    1988-01-01

    Recent developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces can be used with nonlinear feedback to transform certain nonlinear ordinary differential equations into equivalent linear equations. These feedback linearization techniques are applied to resolve two problems arising in the control of spacecraft equipped with control moment gyroscopes (CMGs). The first application involves the computation of rate commands for the gimbals that rotate the individual gyroscopes to produce commanded torques on the spacecraft. The second application is to the long-term management of stored momentum in the system of control moment gyroscopes using environmental torques acting on the vehicle. An approach to distributing control effort among a group of redundant actuators is described that uses feedback linearization techniques to parameterize sets of controls which influence a specified subsystem in a desired way. The approach is adapted for use in spacecraft control with double-gimballed gyroscopes to produce an algorithm that avoids problematic gimbal configurations by approximating sets of gimbal rates that drive CMG rotors into desirable configurations. The momentum management problem is stated as a trajectory optimization problem with a nonlinear dynamical constraint. Feedback linearization and collocation are used to transform this problem into an unconstrained nonlinear program. The approach to trajectory optimization is fast and robust. A number of examples are presented showing applications to the proposed NASA space station.
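    The feedback linearization idea this record relies on can be illustrated on a toy plant (a hedged sketch under assumed dynamics and gains, not the thesis's CMG algorithm): for a pendulum with dynamics theta'' = -sin(theta) + u, choosing u = sin(theta) + v cancels the nonlinearity exactly, leaving the linear double integrator theta'' = v, which an ordinary PD law stabilizes.

    ```python
    import numpy as np

    def simulate(theta0, omega0, dt=1e-3, steps=5000, kp=4.0, kd=4.0):
        """Feedback-linearize a pendulum: theta'' = -sin(theta) + u.
        The input u = sin(theta) + v cancels the nonlinear term, so the
        closed loop behaves as the linear system theta'' = v under a PD law."""
        theta, omega = theta0, omega0
        for _ in range(steps):
            v = -kp * theta - kd * omega      # linear control on the linearized plant
            u = np.sin(theta) + v             # cancel the nonlinearity
            omega += dt * (-np.sin(theta) + u)
            theta += dt * omega               # semi-implicit Euler step
        return theta, omega

    theta, omega = simulate(theta0=1.0, omega0=0.0)
    print(abs(theta), abs(omega))             # both decay toward zero
    ```

    With the assumed gains the closed-loop poles sit at s = -2 (critically damped), so a large initial angle decays to near zero within a few seconds of simulated time.
    
    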

  9. The Effects of the Concrete-Representational-Abstract Integration Strategy on the Ability of Students with Learning Disabilities to Multiply Linear Expressions within Area Problems

    ERIC Educational Resources Information Center

    Strickland, Tricia K.; Maccini, Paula

    2013-01-01

    We examined the effects of the Concrete-Representational-Abstract Integration strategy on the ability of secondary students with learning disabilities to multiply linear algebraic expressions embedded within contextualized area problems. A multiple-probe design across three participants was used. Results indicated that the integration of the…

  10. A Quantitative and Combinatorial Approach to Non-Linear Meanings of Multiplication

    ERIC Educational Resources Information Center

    Tillema, Erik; Gatza, Andrew

    2016-01-01

    We provide a conceptual analysis of how combinatorics problems have the potential to support students to establish non-linear meanings of multiplication (NLMM). The problems we analyze have been used in a series of studies with 6th, 8th, and 10th grade students. We situate the analysis in prior work on students' quantitative and multiplicative…

  11. On some Aitken-like acceleration of the Schwarz method

    NASA Astrophysics Data System (ADS)

    Garbey, M.; Tromeur-Dervout, D.

    2002-12-01

    In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, which is an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or multigrids for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that our algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for computer architectures whose performance is limited by memory bandwidth rather than by the flop performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We will illustrate this highly desirable property of our algorithm with large-scale computing experiments.
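    The Aitken idea behind the procedure can be sketched on a scalar fixed-point iteration with a linear convergence rate. The paper applies the same extrapolation to interface traces of the Schwarz iterates; the toy problem below (x = cos x, a standard example, not from the paper) only illustrates why one Aitken step can beat several plain iterations.

    ```python
    import math

    def aitken(x0, x1, x2):
        """Aitken's delta-squared extrapolation: given three successive iterates
        of a linearly convergent fixed-point iteration x_{k+1} = g(x_k),
        estimate the limit by eliminating the (nearly constant) error ratio."""
        denom = x2 - 2.0 * x1 + x0
        return x2 - (x2 - x1) ** 2 / denom

    g = math.cos                      # fixed point of x = cos(x) is ~0.7390851
    x0 = 1.0
    x1, x2 = g(x0), g(g(x0))

    plain = g(g(g(g(g(x2)))))         # five MORE plain iterations after x2
    accel = aitken(x0, x1, x2)        # one Aitken step from the first three

    star = 0.7390851332151607
    print(abs(plain - star), abs(accel - star))
    ```

    A single extrapolation from three iterates lands closer to the fixed point than five further plain iterations, which is the acceleration effect the Aitken-Schwarz procedure exploits.
    
    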

  12. Bounding solutions of geometrically nonlinear viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, J. M.; Simitses, G. J.

    1985-01-01

    Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion, either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible, approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as those cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.

  13. Bounding solutions of geometrically nonlinear viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, J. M.; Simitses, G. J.

    1986-01-01

    Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion, either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible, approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as those cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.

  14. Hash Bit Selection for Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Junfeng He; Shih-Fu Chang

    2017-11-01

    To overcome the barrier of storage and computation when dealing with gigantic-scale data sets, compact hashing has been studied extensively to approximate the nearest neighbor search. Despite the recent advances, critical design issues remain open in how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits is selected from a pool of candidate bits generated by different features, algorithms, or parameters. Inspired by the optimization criteria used in existing hashing algorithms, we adopt bit reliability and complementarity as the selection criteria, which can be carefully tailored for hashing performance in different tasks. Then, the bit selection solution is discovered by finding the best tradeoff between search accuracy and time using a modified dynamic programming method. To further reduce the computational complexity, we employ the pairwise relationship among hash bits to approximate the high-order independence property, and formulate it as an efficient quadratic programming method that is theoretically equivalent to the normalized dominant set problem in a vertex- and edge-weighted graph. Extensive large-scale experiments have been conducted under several important application scenarios of hash techniques, where our bit selection framework can achieve superior performance over both the naive selection methods and the state-of-the-art hashing algorithms, with significant accuracy gains ranging from 10% to 50%, relatively.
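    The selection idea (prefer reliable bits, penalize redundant ones) can be sketched greedily under simplified proxies: bit variance for reliability and pairwise absolute correlation for complementarity. This is an assumed toy illustration, not the paper's dynamic-programming or quadratic-programming formulation.

    ```python
    import numpy as np

    def select_bits(B, k, lam=1.0):
        """Greedy hash-bit selection on an n x m matrix of {0,1} bits.
        Reliability proxy: bit variance (maximal for balanced 50/50 bits).
        Complementarity proxy: subtract the mean |correlation| with bits
        already chosen, so near-duplicates score poorly."""
        var = B.var(axis=0)
        C = np.abs(np.corrcoef(B.T))
        chosen = [int(np.argmax(var))]
        while len(chosen) < k:
            scores = var - lam * C[:, chosen].mean(axis=1)
            scores[chosen] = -np.inf          # never re-pick a bit
            chosen.append(int(np.argmax(scores)))
        return chosen

    # Four candidate bits over 8 items: bit 1 duplicates bit 0, bit 2 is
    # balanced and independent of bit 0, bit 3 is poorly balanced.
    B = np.array([[0, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 1, 0],
                  [1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [1, 1, 1, 1]])
    print(select_bits(B, 2))   # picks the balanced, mutually independent bits
    ```

    The greedy pass skips the duplicate and the unbalanced bit, which is the qualitative behavior the reliability/complementarity criteria are meant to produce.
    
    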

  15. Problems of classification in the family Paramyxoviridae.

    PubMed

    Rima, Bert; Collins, Peter; Easton, Andrew; Fouchier, Ron; Kurath, Gael; Lamb, Robert A; Lee, Benhur; Maisner, Andrea; Rota, Paul; Wang, Lin-Fa

    2018-05-01

    A number of unassigned viruses in the family Paramyxoviridae need to be classified either as a new genus or placed into one of the seven genera currently recognized in this family. Furthermore, numerous new paramyxoviruses continue to be discovered. However, attempts at classification have highlighted the difficulties that arise by applying historic criteria, or criteria based on sequence alone, to the classification of the viruses in this family. While the recent taxonomic change that elevated the previous subfamily Pneumovirinae into a separate family Pneumoviridae is readily justified on the basis of RNA-dependent RNA polymerase (RdRp or L protein) sequence motifs, using RdRp sequence comparisons for assignment to lower level taxa raises problems that would require an overhaul of the current criteria for assignment into genera in the family Paramyxoviridae. Arbitrary cut-off points to delineate genera and species would have to be set if classification were based on the amino acid sequence of the RdRp alone or on pairwise analysis of sequence complementarity (PASC) of all open reading frames (ORFs). While these cut-offs cannot be made consistent with the current classification in this family, resorting to genus-level demarcation criteria with additional input from the biological context may afford a way forward. Such criteria would reflect the increasingly dynamic nature of virus taxonomy even if it would require a complete revision of the current classification.

  16. Quantum-like Modeling of Cognition

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    2015-09-01

    This paper begins with a historical review of the mutual influence of physics and psychology, from Freud's invention of psychic energy inspired by von Boltzmann's thermodynamics to the enrichment quantum physics gained from the side of psychology through the notion of complementarity (the invention of Niels Bohr, who was inspired by William James); we also consider the resonance of the correspondence between Wolfgang Pauli and Carl Jung in both physics and psychology. Then we turn to the problem of development of mathematical models for laws of thought, starting with Boolean logic and progressing towards foundations of classical probability theory. Interestingly, the laws of classical logic and probability are routinely violated not only by quantum statistical phenomena but by cognitive phenomena as well. This is yet another common feature between quantum physics and psychology. In particular, cognitive data can exhibit a kind of the probabilistic interference effect. This similarity with quantum physics convinced a multi-disciplinary group of scientists (physicists, psychologists, economists, sociologists) to apply the mathematical apparatus of quantum mechanics to modeling of cognition. We illustrate this activity by considering a few concrete phenomena: the order and disjunction effects, recognition of ambiguous figures, categorization-decision making. In Appendix 1 we briefly present essentials of theory of contextual probability and a method of representations of contextual probabilities by complex probability amplitudes (solution of the ``inverse Born's problem'') based on a quantum-like representation algorithm (QLRA).

  17. ELAS: A general-purpose computer program for the equilibrium problems of linear structures. Volume 2: Documentation of the program. [subroutines and flow charts

    NASA Technical Reports Server (NTRS)

    Utku, S.

    1969-01-01

    A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution insures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the best-fit strain tensors in the least squares at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.

  18. Analysis of Slope Limiters on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Berger, Marsha; Aftosmis, Michael J.

    2005-01-01

    This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. Many slope limiters in standard use do not preserve linear solutions on irregular grids, impacting both accuracy and convergence. We rewrite some well-known limiters to highlight their underlying symmetry, and use this form to examine the properties of both traditional and novel limiter formulations on non-uniform meshes. A consistent method of handling stretched meshes is developed which is linearity-preserving for arbitrary mesh stretchings and reduces to common limiters on uniform meshes. In multiple dimensions we analyze the monotonicity region of the gradient vector and show that the multidimensional limiting problem may be cast as the solution of a linear programming problem. For some special cases we present a new directional limiting formulation that preserves linear solutions in multiple dimensions on irregular grids. Computational results using model problems and complex three-dimensional examples are presented, demonstrating accuracy, monotonicity and robustness.
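    The linearity-preservation issue can be seen in a minimal 1-D sketch (a generic illustration, not the paper's formulation): a uniform-grid style minmod limiter applied to undivided differences clips the slope of an exactly linear function on a stretched mesh, while the mesh-aware version using divided differences recovers it.

    ```python
    def minmod(a, b):
        """Classic minmod: zero at extrema, else the smaller-magnitude argument."""
        if a * b <= 0.0:
            return 0.0
        return a if abs(a) < abs(b) else b

    # Linear data u = 2x on a stretched mesh (stretching ratio 2): exact slope 2.
    x = [0.0, 1.0, 3.0]
    u = [2.0 * xi for xi in x]
    i = 1

    # Uniform-grid style: minmod of UNdivided differences over the mean spacing.
    # On a stretched mesh this clips the slope of a linear function.
    s_uniform = minmod(u[i] - u[i - 1], u[i + 1] - u[i]) / (0.5 * (x[i + 1] - x[i - 1]))

    # Mesh-aware form: minmod of one-sided DIVIDED differences, which agree
    # exactly for linear data regardless of stretching.
    s_aware = minmod((u[i] - u[i - 1]) / (x[i] - x[i - 1]),
                     (u[i + 1] - u[i]) / (x[i + 1] - x[i]))

    print(s_uniform, s_aware)
    ```

    The uniform-grid form returns 4/3 instead of the exact slope 2, which is precisely the accuracy loss on irregular grids that motivates linearity-preserving reformulations.
    
    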

  19. Linking Research and Teaching: Context, Conflict and Complementarity

    ERIC Educational Resources Information Center

    Pan, Wei; Cotton, Debby; Murray, Paul

    2014-01-01

    Although research and teaching have often been regarded as complementary in enhancing the quality of student learning, little previous research has explored the conflicts associated with linking the two activities. This paper aims to examine specific issues arising within the environmental building disciplines at a UK university, and to explore…

  20. Criterion-Referenced and Norm-Referenced Assessments: Compatibility and Complementarity

    ERIC Educational Resources Information Center

    Lok, Beatrice; McNaught, Carmel; Young, Kenneth

    2016-01-01

    The tension between criterion-referenced and norm-referenced assessment is examined in the context of curriculum planning and assessment in outcomes-based approaches to higher education. This paper argues the importance of a criterion-referenced assessment approach once an outcomes-based approach has been adopted. It further discusses the…

  1. Health and Schooling: Evidence and Policy Implications for Developing Countries.

    ERIC Educational Resources Information Center

    Gomes-Neto, Joao Batista; And Others

    1997-01-01

    Exploits a unique data set (EDRURAL) from three northeastern states of Brazil to investigate the complementarities of health with school attainment and cognitive achievement. The promotion models and value-added achievement models demonstrate the value of students' visual acuity. Achievement models highlight the role of good nutrition. Eye…

  2. Compound Complementarities in the Study of Motivated Behavior.

    ERIC Educational Resources Information Center

    Teitelbaum, Philip; Stricker, Edward M.

    1994-01-01

    The 1954 article by Eliot Stellar provided the theoretical focus for a great deal of research on the biological bases of human behavior. Future attention to the infrastructure of behaviors being studied, combined with reductionistic studies of neurons, will fulfill the potential contribution to behavioral neuroscience implicit in Stellar's…

  3. Inclusive Education--A Christian Perspective to an "Overlapping Consensus"

    ERIC Educational Resources Information Center

    Pirner, Manfred L.

    2015-01-01

    The UN Convention on the Rights of Persons with Disabilities has triggered endeavours in many countries to implement inclusive education at public schools. A Christian interpretation that concentrates on the anthropological themes of fragmentarity, fragility and complementarity offers valuable impulses to the public discourse on inclusive education,…

  4. Racial Interaction Effects and Student Achievement

    ERIC Educational Resources Information Center

    Penney, Jeffrey

    2017-01-01

    Previous research has found that students who are of the same race as their teacher tend to perform better academically. This paper examines the possibility that both dosage and timing matter for these racial complementarities. Using a model of education production that explicitly accounts for past observable inputs, a conditional…

  5. Interpersonal Attraction and Machiavellianism: A Study of Roommate Pairs.

    ERIC Educational Resources Information Center

    Riedel, Marc; Thew, Karen

    The study attempts to test hypotheses derived from the model of interpersonal attraction suggested by Kerckhoff and Davis, who investigated the issue of need complementarity versus similarity in their longitudinal research upon couples who were engaged or otherwise seriously attached and who proposed that homogamy in social attributes is…

  6. Integrating PCR Theory and Bioinformatics into a Research-oriented Primer Design Exercise

    ERIC Educational Resources Information Center

    Robertson, Amber L.; Phillips, Allison R.

    2008-01-01

    Polymerase chain reaction (PCR) is a conceptually difficult technique that embodies many fundamental biological processes. Traditionally, students have struggled to analyze PCR results due to an incomplete understanding of the biological concepts (theory) of DNA replication and strand complementarity. Here we describe the design of a novel…

  7. The Person Approach: Concepts, Measurement Models, and Research Strategy

    ERIC Educational Resources Information Center

    Magnusson, David

    2003-01-01

    This chapter discusses the "person approach" to studying developmental processes by focusing on the distinction and complementarity between this holistic-interactionistic framework and what has become designated as the variable approach. Particular attention is given to measurement models for use in the person approach. The discussion on the…

  8. Perspektiven der angewandten Linguistik (Perspectives in Applied Linguistics).

    ERIC Educational Resources Information Center

    Watts, Richard J., Ed.; Werlen, Iwar, Ed.

    1995-01-01

    Articles in this issue include: "Complementarite et concurrence des politiques linguistiques au Canada: Le choix du medium d'instruction au Quebec et en Ontario" (The Complementarity and Competition of Language Policies in Canada: The Choice of Medium of Instruction in Quebec and Ontario) (Normand Labrie); "Presentation de la…

  9. Radical Behaviorism and Buddhism: Complementarities and Conflicts

    ERIC Educational Resources Information Center

    Diller, James W.; Lattal, Kennon A.

    2008-01-01

    Comparisons have been made between Buddhism and the philosophy of science in general, but there have been only a few attempts to draw comparisons directly with the philosophy of radical behaviorism. The present review therefore considers heretofore unconsidered points of comparison between Buddhism and radical behaviorism in terms of their…

  10. Compressed modes for variational problems in mathematics and physics

    PubMed Central

    Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-01-01

    This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861

  11. Compressed modes for variational problems in mathematics and physics.

    PubMed

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-11-12

    This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.
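    Why an L1 term produces solutions that are exactly zero over much of the domain can be seen in a one-line toy problem (an illustrative sketch, not the paper's algorithm): minimizing a quadratic plus an L1 penalty is solved by soft thresholding, which clips small components to exactly zero.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t*||.||_1: shrink toward zero, zeroing small entries."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    # Toy problem: min_x 0.5*||x - b||^2 + mu*||x||_1 has the closed-form
    # solution x = soft_threshold(b, mu); entries with |b_i| <= mu vanish.
    b = np.array([3.0, 0.5, -2.0, 0.1, 0.0])
    mu = 1.0
    x = soft_threshold(b, mu)
    print(x)   # small entries become exactly zero: localization from the L1 term
    ```

    The exact zeros (rather than merely small values) are the 1-D analogue of the compact support that makes compressed modes attractive for linear-scaling algorithms.
    
    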

  12. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression

    PubMed Central

    Jiang, Feng; Han, Ji-zhong

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into the weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods. PMID:29623088

  13. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression.

    PubMed

    Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into the weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
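    The LWLR component named in the abstract can be sketched in its standard textbook form (a generic 1-D sketch with an assumed Gaussian kernel and bandwidth, not the FCLWLR algorithm itself): for each query point, fit a weighted least-squares line whose weights decay with distance from the query.

    ```python
    import numpy as np

    def lwlr_predict(x_query, X, y, tau=0.5):
        """Locally weighted linear regression: fit a weighted least-squares line
        around x_query with Gaussian weights w_i = exp(-(x_i - x)^2 / (2 tau^2))."""
        w = np.exp(-((X - x_query) ** 2) / (2.0 * tau ** 2))
        A = np.vstack([np.ones_like(X), X]).T          # design matrix [1, x]
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        return beta[0] + beta[1] * x_query

    X = np.linspace(0.0, 4.0, 9)
    y = 2.0 * X + 1.0                                  # exactly linear data
    pred = lwlr_predict(2.0, X, y)
    print(pred)                                        # recovers 2*2 + 1 = 5
    ```

    Because a fresh local fit is solved per query, LWLR has no global parametric form, which is why the abstract credits it with sidestepping the underfitting/overfitting tradeoff of parametric regression.
    
    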

  14. Robust L1-norm two-dimensional linear discriminant analysis.

    PubMed

    Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang

    2015-05-01

    In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transferred to a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple justifiable iterative technique, and its convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and on face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Variational finite-difference methods in linear and nonlinear problems of the deformation of metallic and composite shells (review)

    NASA Astrophysics Data System (ADS)

    Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.

    2012-11-01

    Variational finite-difference methods of solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in a traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of using the variational finite-difference methods to solve linear and nonlinear problems of the statics of shells (plates).

  16. A new formulation for anisotropic radiative transfer problems. I - Solution with a variational technique

    NASA Technical Reports Server (NTRS)

    Cheyney, H., III; Arking, A.

    1976-01-01

    The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.

  17. Numerical Study of Frictional Properties and the Role of Cohesive End-Zones in Large Strike- Slip Earthquakes

    NASA Astrophysics Data System (ADS)

    Lovely, P. J.; Mutlu, O.; Pollard, D. D.

    2007-12-01

    Cohesive end-zones (CEZs) are regions of increased frictional strength and/or cohesion near the peripheries of faults that cause slip distributions to taper toward the fault-tip. Laboratory results, field observations, and theoretical models suggest an important role for CEZs in small-scale fractures and faults; however, their role in crustal-scale faulting and associated large earthquakes is less thoroughly understood. We present a numerical study of the potential role of CEZs on slip distributions in large, multi-segmented, strike-slip earthquake ruptures including the 1992 Landers Earthquake (Mw 7.2) and 1999 Hector Mine Earthquake (Mw 7.1). Displacement discontinuity is calculated using a quasi-static, 2D plane-strain boundary element (BEM) code for a homogeneous, isotropic, linear-elastic material. Friction is implemented by enforcing principles of complementarity. Model results with and without CEZs are compared with slip distributions measured by combined inversion of geodetic, strong ground motion, and teleseismic data. Stepwise and linear distributions of increasing frictional strength within CEZs are considered. The incorporation of CEZs in our model enables an improved match to slip distributions measured by inversion, suggesting that CEZs play a role in governing slip in large, strike-slip earthquakes. Additionally, we present a parametric study highlighting the strong sensitivity of modeled slip magnitude to small variations in the coefficient of friction. This result suggests that, provided a sufficiently well-constrained stress tensor and elastic moduli for the surrounding rock, relatively simple models could provide precise estimates of the magnitude of frictional strength. These results are verified by comparison with geometrically comparable finite element (FEM) models using the commercial code ABAQUS. In the FEM models, friction is implemented by use of both Lagrange multipliers and penalty methods.

  18. Numerical solution of system of boundary value problems using B-spline with free parameter

    NASA Astrophysics Data System (ADS)

    Gupta, Yogesh

    2017-01-01

    This paper deals with a B-spline method for solving a system of boundary value problems. Differential equations are useful in various fields of science and engineering. Some interesting real-life problems involve more than one unknown function, resulting in systems of simultaneous differential equations. Such systems have been applied to many problems in mathematics, physics, engineering, etc. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points together with the equations of the given system and boundary conditions, resulting in a linear matrix equation.
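
    To illustrate the kind of linear matrix equation such methods produce, here is a minimal cubic B-spline collocation sketch for the scalar model problem u'' = -1 on [0, 1] with u(0) = u(1) = 0. The grid size and the standard nodal-value identities for uniform cubic B-splines are textbook assumptions, not taken from the paper.

```python
import numpy as np

# On a uniform grid x_j = j*h, the nodal values of a cubic B-spline
# expansion u = sum_i c_i B_i satisfy the standard identities
#   u(x_j)   = (c_{j-1} + 4 c_j + c_{j+1}) / 6
#   u''(x_j) = (c_{j-1} - 2 c_j + c_{j+1}) / h^2
n = 10
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = -np.ones(n + 1)                        # take f = -1; exact u = x(1-x)/2

N = n + 3                                  # coefficients c_{-1}, ..., c_{n+1}
A = np.zeros((N, N))
b = np.zeros(N)

A[0, 0:3] = [1 / 6, 4 / 6, 1 / 6]          # boundary condition u(0) = 0
A[N - 1, N - 3:N] = [1 / 6, 4 / 6, 1 / 6]  # boundary condition u(1) = 0

for j in range(n + 1):                     # collocate u''(x_j) = f(x_j)
    A[j + 1, j:j + 3] = [1 / h**2, -2 / h**2, 1 / h**2]
    b[j + 1] = f[j]

c = np.linalg.solve(A, b)                  # the "linear matrix equation"
u = (c[:-2] + 4 * c[1:-1] + c[2:]) / 6     # recovered nodal values
err = np.max(np.abs(u - x * (1 - x) / 2))
```

    Because the exact solution here is a quadratic, which lies in the cubic spline space, the collocation solution reproduces it to machine precision; for general right-hand sides the error decays with h.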

  19. A Mixed-Integer Linear Programming Problem which is Efficiently Solvable.

    DTIC Science & Technology

    1987-10-01

    Leiserson, Charles; Saxe, James B. (The remainder of this abstract is garbled OCR. The recoverable fragments discuss sparse instances of the problem, describe relaxing each edge (i, j) by computing x_j <- min(x_j, x_i + a_ij), and note that a simple analysis indicates why the Bellman-Ford algorithm works.)

  20. Feedback linearization of singularly perturbed systems based on canonical similarity transformations

    NASA Astrophysics Data System (ADS)

    Kabanov, A. A.

    2018-05-01

    This paper discusses the problem of feedback linearization of a singularly perturbed system in state-dependent coefficient form. The result is based on the introduction of a canonical similarity transformation. The transformation matrix is constructed from separate blocks for the fast and slow parts of the original singularly perturbed system. The transformed singularly perturbed system has a linear canonical form that significantly simplifies the control design problem. The proposed similarity transformation linearizes the system without introducing a virtual output (as the normal-form method requires), and the transition from the phase coordinates of the transformed system to the state variables of the original system is simpler. The application of the proposed approach is illustrated through an example.

  1. Algorithms for sorting unsigned linear genomes by the DCJ operations.

    PubMed

    Jiang, Haitao; Zhu, Binhai; Zhu, Daming

    2011-02-01

    The double cut and join operation (abbreviated as DCJ) has been extensively used for genomic rearrangement. Although the DCJ distance between signed genomes with both linear and circular (uni- and multi-) chromosomes is well studied, the only known result for the NP-complete unsigned DCJ distance problem is an approximation algorithm for unsigned linear unichromosomal genomes. In this article, we study the problem of computing the DCJ distance on two unsigned linear multichromosomal genomes (abbreviated as UDCJ). We devise a 1.5-approximation algorithm for UDCJ by exploiting the distance formula for signed genomes. In addition, we show that UDCJ admits a weak kernel of size 2k and hence an FPT algorithm running in O(2^(2k) n) time.

  2. A reciprocal theorem for a mixture theory. [development of linearized theory of interacting media

    NASA Technical Reports Server (NTRS)

    Martin, C. J.; Lee, Y. M.

    1972-01-01

    A dynamic reciprocal theorem for a linearized theory of interacting media is developed. The constituents of the mixture are a linear elastic solid and a linearly viscous fluid. In addition to Steel's field equations, boundary conditions and inequalities on the material constants that have been shown by Atkin, Chadwick and Steel to be sufficient to guarantee uniqueness of solution to initial-boundary value problems are used. The elements of the theory are given and two different boundary value problems are considered. The reciprocal theorem is derived with the aid of the Laplace transform and the divergence theorem and this section is concluded with a discussion of the special cases which arise when one of the constituents of the mixture is absent.

  3. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
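
    The contrast statistic and the effect of spatial averaging can be illustrated with a short simulation. The window size, the exponential intensity model, and the 2x2 block averaging below are illustrative choices, not the authors' system parameters.

```python
import numpy as np

def speckle_contrast(img, win=7):
    """Local speckle contrast K = std / mean over win x win windows."""
    rows = img.shape[0] - win + 1
    cols = img.shape[1] - win + 1
    K = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = img[i:i + win, j:j + win]
            K[i, j] = patch.std() / patch.mean()
    return K

# Fully developed static speckle has exponential intensity statistics,
# so K should be close to 1. Averaging 2x2 pixel blocks mimics spatial
# averaging by large camera pixels and lowers the measured contrast.
rng = np.random.default_rng(0)
img = rng.exponential(1.0, (64, 64))
binned = img.reshape(32, 2, 32, 2).mean(axis=(1, 3))
```

    Comparing the mean contrast of `img` and `binned` shows the drop in measured K that the system factor correction discussed in the abstract is meant to compensate.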

  4. The numerical solution of linear multi-term fractional differential equations: systems of equations

    NASA Astrophysics Data System (ADS)

    Edwards, John T.; Ford, Neville J.; Simpson, A. Charles

    2002-11-01

    In this paper, we show how the numerical approximation of the solution of a linear multi-term fractional differential equation can be calculated by reduction of the problem to a system of ordinary and fractional differential equations each of order at most unity. We begin by showing how our method applies to a simple class of problems and we give a convergence result. We solve the Bagley Torvik equation as an example. We show how the method can be applied to a general linear multi-term equation and give two further examples.

  5. Model checking for linear temporal logic: An efficient implementation

    NASA Technical Reports Server (NTRS)

    Sherman, Rivi; Pnueli, Amir

    1990-01-01

    This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was done with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.

  6. Lyapunov stability and its application to systems of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Kennedy, E. W.

    1979-01-01

    An outline and a brief introduction to some of the concepts and implications of Lyapunov stability theory are presented. Various aspects of the theory are illustrated by the inclusion of eight examples, including the Cartesian coordinate equations of the two-body problem, linear and nonlinear (Van der Pol's equation) oscillatory systems, and the linearized Kustaanheimo-Stiefel element equations for the unperturbed two-body problem.

  7. Some Issues about the Introduction of First Concepts in Linear Algebra during Tutorial Sessions at the Beginning of University

    ERIC Educational Resources Information Center

    Grenier-Boley, Nicolas

    2014-01-01

    Certain mathematical concepts were not introduced to solve a specific open problem but rather to solve different problems with the same tools in an economic formal way or to unify several approaches: such concepts, as some of those of linear algebra, are presumably difficult to introduce to students as they are potentially interwoven with many…

  8. Extended Decentralized Linear-Quadratic-Gaussian Control

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2000-01-01

    A straightforward extension of a solution to the decentralized linear-quadratic-Gaussian problem is proposed that allows its use for commonly encountered classes of problems that are currently solved with the extended Kalman filter. This extension allows the system to be partitioned in such a way as to exclude the nonlinearities from the essential algebraic relationships that allow the estimation and control to be optimally decentralized.

  9. An automated system for reduction of the firm's employees under maximal overall efficiency

    NASA Astrophysics Data System (ADS)

    Yonchev, Yoncho; Nikolov, Simeon; Baeva, Silvia

    2012-11-01

    Achieving maximal overall efficiency is a priority in all companies. This problem is formulated as a knapsack problem and afterwards as a linear assignment problem. An automated system is created for solving this problem.
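
    The knapsack formulation mentioned above can be sketched with the standard 0/1 dynamic program; the function below is a textbook solver, not the paper's automated system.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming over capacities.

    dp[c] holds the best total value achievable with capacity c using the
    items processed so far; iterating capacities downward ensures each
    item is used at most once. Runs in O(len(values) * capacity).
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

    For example, with values [60, 100, 120], weights [10, 20, 30], and capacity 50, the optimum selects the last two items for a total value of 220.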

  10. Analysis of junior high school students' attempt to solve a linear inequality problem

    NASA Astrophysics Data System (ADS)

    Taqiyuddin, Muhammad; Sumiaty, Encum; Jupri, Al

    2017-08-01

    Linear inequality is one of the fundamental subjects within junior high school mathematics curricula. Several studies have been conducted to assess students' performance on linear inequalities. However, linear inequality problems of the form "ax + b < dx + e" with "a, d ≠ 0" and "a ≠ d" can hardly be found in the textbook used by Indonesian students or in previous studies. This condition leads to the research questions concerning students' attempts at solving a simple linear inequality problem of this form. To that end, a written test was administered to 58 students from two schools in Bandung, followed by interviews. The other sources of data are interviews with teachers and the mathematics books used by students. The constant comparative method was then used to analyse the data. The result shows that the majority approached the question by doing algebraic operations. Interestingly, most of them did so incorrectly; only some used algebraic operations correctly. The others gave expected-number solutions, rewrote the question, translated the inequality into words, or left the answer blank. Furthermore, we found that no one was conscious of the existence of an all-numbers solution. This condition is reasonably attributed to how little the learning materials address why a procedure for solving a linear inequality works and what forms the solution set of a linear inequality can take.
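
    A small solver for inequalities of the form "ax + b < dx + e" makes explicit the two steps students struggled with: the direction flip when dividing by a negative coefficient, and the degenerate all-numbers / no-solution cases. This sketch is illustrative only and not part of the study.

```python
from fractions import Fraction

def solve_linear_inequality(a, b, d, e):
    """Solve a*x + b < d*x + e over the reals.

    Rearranged: (a - d) * x < e - b. Dividing by (a - d) flips the
    inequality direction when a - d is negative; when a = d the
    inequality is either true for all reals or for none.
    """
    coef = Fraction(a - d)
    rhs = Fraction(e - b)
    if coef == 0:
        return "all reals" if rhs > 0 else "no solution"
    bound = rhs / coef
    return f"x < {bound}" if coef > 0 else f"x > {bound}"
```

    For instance, 2x + 3 < 5x + 9 rearranges to -3x < 6, and dividing by -3 flips the sign to give x > -2.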

  11. Individualized Math Problems in Simple Equations. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume require solution of linear equations, systems…

  12. Integrating PCR theory and bioinformatics into a research-oriented primer design exercise.

    PubMed

    Robertson, Amber L; Phillips, Allison R

    2008-01-01

    Polymerase chain reaction (PCR) is a conceptually difficult technique that embodies many fundamental biological processes. Traditionally, students have struggled to analyze PCR results due to an incomplete understanding of the biological concepts (theory) of DNA replication and strand complementarity. Here we describe the design of a novel research-oriented exercise that prepares students to design DNA primers for PCR. Our exercise design includes broad and specific learning goals and assessments of student performance and perceptions. We developed this interactive Primer Design Exercise using the principles of scientific teaching to enhance student understanding of the theory behind PCR and provide practice in designing PCR primers to amplify DNA. In the end, the students were more poised to troubleshoot problems that arose in real experiments using PCR. In addition, students had the opportunity to utilize several bioinformatics tools to gain an increased understanding of primer quality, directionality, and specificity. In the course of this study many misconceptions about DNA replication during PCR and the need for primer specificity were identified and addressed. Students were receptive to the new materials and the majority achieved the learning goals.

  13. "It's What We Use as a Community": Exploring Students' STEM Characterizations In Two Montessori Elementary Classrooms

    NASA Astrophysics Data System (ADS)

    Szostkowski, Alaina Hopkins

    Integrated science, technology, engineering, and mathematics (STEM) education promises to enhance elementary students' engagement in science and related fields and to cultivate their problem-solving abilities. While STEM has become an increasingly popular reform initiative, it is still developing within the Montessori education community. There is limited research on STEM teaching and learning in Montessori classrooms, particularly from student perspectives. Previous studies suggest productive connections between reform-based pedagogies in mainstream science education and the Montessori method. Greater knowledge of this complementarity, and student perspectives on STEM, may benefit both Montessori and non-Montessori educators. This instrumental case study of two elementary classrooms documented student characterizations of aspects of STEM in the context of integrated STEM instruction over three months in the 2016-2017 school year. Findings show that the Montessori environment played an important role, and that students characterized STEM in inclusive, agentive, connected, helpful, creative, and increasingly critical ways. Implications for teaching and future research offer avenues to envision STEM education more holistically by leveraging the moral and humanistic aspects of Montessori philosophy.

  14. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  15. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.

  16. A geometric approach to failure detection and identification in linear systems

    NASA Technical Reports Server (NTRS)

    Massoumnia, M. A.

    1986-01-01

    Using concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system, in two settings: (1) the components can fail simultaneously, and (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency domain interpretation of the results is used to relate the concepts of failure sensitive observers with the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.

  17. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary "Scalar" type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.

  18. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications including the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and stochastic setup and verify that for mildly-ill-posed problems and Gaussian noise, these conditions are satisfied almost surely, where on the contrary, in the severely-ill-posed case and in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
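
    The quasi-optimality principle can be sketched for plain Tikhonov regularization: compute regularized solutions on a geometric grid of parameters and pick the parameter minimizing the difference between consecutive solutions. The grid, the Hilbert-matrix test problem, and the use of plain Tikhonov (rather than the paper's linear functional strategy) are illustrative assumptions.

```python
import numpy as np

def quasi_optimality(A, y, alphas):
    """Heuristic quasi-optimality rule for Tikhonov regularization.

    For each alpha, solves x_alpha = argmin ||A x - y||^2 + alpha ||x||^2
    via the normal equations, then selects the alpha at which consecutive
    regularized solutions change the least (no noise level is needed,
    which is what makes the rule heuristic).
    """
    n = A.shape[1]
    xs = [np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ y) for a in alphas]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    k = int(np.argmin(diffs))
    return alphas[k], xs[k]
```

    In practice the alphas form a decreasing geometric sequence; the rule looks for the plateau where the solution has stabilized before noise amplification sets in.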

  19. Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices

    NASA Astrophysics Data System (ADS)

    Passemier, Damien; McKay, Matthew R.; Chen, Yang

    2015-07-01

    Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.

  20. The Use of Efficient Broadcast Protocols in Asynchronous Distributed Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Schmuck, Frank Bernhard

    1988-01-01

    Reliable broadcast protocols are important tools in distributed and fault-tolerant programming. They are useful for sharing information and for maintaining replicated data in a distributed system. However, a wide range of such protocols has been proposed. These protocols differ in their fault tolerance and delivery ordering characteristics. There is a tradeoff between the cost of a broadcast protocol and how much ordering it provides. It is, therefore, desirable to employ protocols that support only a low degree of ordering whenever possible. This dissertation presents techniques for deciding how strongly ordered a protocol is necessary to solve a given application problem. It is shown that there are two distinct classes of application problems: problems that can be solved with efficient, asynchronous protocols, and problems that require global ordering. The concept of a linearization function that maps partially ordered sets of events to totally ordered histories is introduced. It is shown how to construct an asynchronous implementation that solves a given problem if a linearization function for it can be found. It is proved that in general the question of whether a problem has an asynchronous solution is undecidable. Hence there exists no general algorithm that would automatically construct a suitable linearization function for a given problem. Therefore, an important subclass of problems that have certain commutativity properties is considered. Techniques for constructing asynchronous implementations for this class are presented. These techniques are useful for constructing efficient asynchronous implementations for a broad range of practical problems.
