Sample records for unconstrained optimization problem

  1. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; furthermore, unconstrained solution algorithms can be used as building blocks within constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, and a finite element structural analysis is performed to calculate the response of the system (displacements, stresses, eigenvalues, etc.). Based upon the sensitivity of the objective and constraint functions, an optimizer such as ADS or IDESIGN can then be used to find a new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, and dynamic analysis alike. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern high-performance computers such as the Convex, Cray-2, and Cray-YMP. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into a widely used finite-element production code such as SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested in a parallel computing environment. The unconstrained optimization subroutines are not only useful in their own right but can also be incorporated into a more popular constrained optimization code, such as ADS.
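    The constrained-to-unconstrained conversion described above is classically done with a penalty method. As a minimal sketch (in Python for brevity; the objective f, constraint g, penalty schedule, and use of SciPy's BFGS are illustrative assumptions, not the report's code), each outer iteration solves one unconstrained problem whose penalty weight grows:

    ```python
    import numpy as np
    from scipy.optimize import minimize  # generic unconstrained minimizer

    def f(x):                            # hypothetical objective
        return x[0]**2 + 2.0 * x[1]**2

    def g(x):                            # hypothetical constraints, g_i(x) <= 0
        return np.array([1.0 - x[0] - x[1]])

    def penalized(x, r):
        # quadratic exterior penalty: feasible points incur no penalty
        return f(x) + r * np.sum(np.maximum(0.0, g(x)) ** 2)

    x = np.array([0.0, 0.0])
    for r in (1.0, 10.0, 100.0, 1000.0):   # series of unconstrained problems
        x = minimize(lambda z: penalized(z, r), x, method="BFGS").x
    print(x)  # approaches the constrained minimizer as r grows
    ```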

  2. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    A review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. Much more recent additions to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yousong, E-mail: yousong.luo@rmit.edu.au

    This paper deals with a class of optimal control problems governed by an initial-boundary value problem for a parabolic equation. The case of semi-linear boundary control is studied, where the control is applied to the system via the Wentzell boundary condition. The differentiability of the state variable with respect to the control is established, and hence a necessary condition is derived for the optimal solution in the case of both unconstrained and constrained problems. The condition is also sufficient for unconstrained convex problems. A second-order condition is also derived.

  4. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four-processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance, indicating that these parallel-vector algorithms can be used in a new generation of finite-element-based structural design/analysis-synthesis codes.
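    The Golden Block Search referenced above is a parallel variant of golden-section line search. For orientation only, here is a minimal serial golden-section search in Python (an illustrative sketch of the underlying technique, not the paper's parallel-vector implementation):

    ```python
    import math

    def golden_section(phi, a, b, tol=1e-8):
        """Minimize a unimodal function phi on [a, b] by golden-section search."""
        inv_phi = (math.sqrt(5.0) - 1.0) / 2.0     # 1/golden ratio, ~0.618
        c = b - inv_phi * (b - a)                  # lower interior point
        d = a + inv_phi * (b - a)                  # upper interior point
        while (b - a) > tol:
            if phi(c) < phi(d):                    # minimum lies in [a, d]
                b, d = d, c
                c = b - inv_phi * (b - a)
            else:                                  # minimum lies in [c, b]
                a, c = c, d
                d = a + inv_phi * (b - a)
        return 0.5 * (a + b)

    # e.g., a step length along a descent direction of a hypothetical objective
    alpha = golden_section(lambda t: (t - 0.3) ** 2 + 1.0, 0.0, 1.0)
    ```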

  5. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods have been recognized as an interesting technique for solving optimization problems, due to their numerical efficiency, simplicity, and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aust. J. Basic Appl. Sci. 5 (2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally with an exact line search. Numerical results show that our proposed method is efficient on the given standard test problems, compared to other existing CG methods.
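    For context, such modified coefficients plug into the generic nonlinear CG iteration (the Fletcher-Reeves coefficient is shown only as a familiar example; the coefficient proposed in the paper differs):

    ```latex
    d_0 = -g_0, \qquad
    x_{k+1} = x_k + \alpha_k d_k, \qquad
    d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad
    \beta_k^{FR} = \frac{\lVert g_{k+1}\rVert^{2}}{\lVert g_k\rVert^{2}},
    ```

    where the step size \alpha_k comes from an exact or inexact line search and different choices of \beta_k define the different CG methods being compared.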

  6. An indirect method for numerical optimization using the Kreisselmeir-Steinhauser function

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    1989-01-01

    A technique is described for converting a constrained optimization problem into an unconstrained one. The technique transforms one or more objective functions into reduced objective functions, which are analogous to the goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints, and an envelope of the entire function set is computed using the Kreisselmeir-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a sequential unconstrained minimization technique (SUMT) algorithm. Advantages of this approach are that unconstrained optimization methods can be used to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from either the feasible or the infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize for each individual objective function separately.
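    The Kreisselmeir-Steinhauser function used above is a smooth envelope (soft maximum) of a set of functions. In its standard form (quoted from general usage; the aggregation parameter \rho is a free choice), it bounds the pointwise maximum of the constraint and reduced-objective set g_1, ..., g_m:

    ```latex
    KS(\mathbf{x}) = \frac{1}{\rho}\,\ln\!\left(\sum_{j=1}^{m} e^{\rho\, g_j(\mathbf{x})}\right),
    \qquad
    \max_j g_j(\mathbf{x}) \;\le\; KS(\mathbf{x}) \;\le\; \max_j g_j(\mathbf{x}) + \frac{\ln m}{\rho},
    ```

    so larger \rho tightens the envelope while keeping it differentiable, which is what allows a single unconstrained search over the composite function.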

  7. An overview of unconstrained free boundary problems

    PubMed Central

    Figalli, Alessio; Shahgholian, Henrik

    2015-01-01

    In this paper, we present a survey concerning unconstrained free boundary problems of a type in which B1 is the unit ball, Ω is an unknown open set, F1 and F2 are elliptic operators (admitting regular solutions), and the admissible class is a function space to be specified in each case. Our main objective is to discuss a unifying approach to the optimal regularity of solutions to the above matching problems, and to list several open problems in this direction. PMID:26261367

  8. A modified conjugate gradient coefficient with inexact line search for unconstrained optimization

    NASA Astrophysics Data System (ADS)

    Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa

    2016-11-01

    The conjugate gradient (CG) method is a line search algorithm widely known for its application in solving unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization problems. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under an inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison with some previous CG methods. The results obtained indicate that our method is indeed superior.
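    The two properties invoked above have standard formal statements (given here in their textbook form, not the paper's specific variants): sufficient descent requires the search direction d_k to satisfy

    ```latex
    g_k^{\top} d_k \;\le\; -c\,\lVert g_k\rVert^{2}, \qquad c > 0,
    ```

    while a typical inexact line search accepts any step \alpha_k meeting the Wolfe conditions

    ```latex
    f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k\, g_k^{\top} d_k,
    \qquad
    g(x_k + \alpha_k d_k)^{\top} d_k \ge c_2\, g_k^{\top} d_k,
    \qquad 0 < c_1 < c_2 < 1 .
    ```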

  9. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method can take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, to take conflicting and multiple design criteria into account. The design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time being able to handle rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming forms is presented. The resulting optimization problem was solved using (i) the linear extended interior penalty function method and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.

  10. Optimization of flexible wing structures subject to strength and induced drag constraints

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1977-01-01

    An optimization procedure for designing wing structures subject to stress, strain, and drag constraints is presented. The optimization method utilizes an extended penalty function formulation for converting the constrained problem into a series of unconstrained ones. Newton's method is used to solve the unconstrained problems. An iterative analysis procedure is used to obtain the displacements of the wing structure including the effects of load redistribution due to the flexibility of the structure. The induced drag is calculated from the lift distribution. Approximate expressions for the constraints used during major portions of the optimization process enhance the efficiency of the procedure. A typical fighter wing is used to demonstrate the procedure. Aluminum and composite material designs are obtained. The tradeoff between weight savings and drag reduction is investigated.

  11. An expert system for choosing the best combination of options in a general purpose program for automated design synthesis

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Barthelemy, J.-F. M.

    1986-01-01

    An expert system called EXADS has been developed to aid users of the Automated Design Synthesis (ADS) general-purpose optimization program. ADS has approximately 100 combinations of strategy, optimizer, and one-dimensional search options from which to choose, and it is difficult for a nonexpert to make this choice. The expert system aids the user in choosing the best combination of options based on the user's knowledge of the problem and the expert knowledge stored in the knowledge base. The knowledge base is divided into three categories: constrained problems, unconstrained problems, and constrained problems being treated as unconstrained problems. The inference engine and rules are written in LISP, contain about 200 rules, and execute on DEC-VAX (with Franz-LISP) and IBM PC (with IQ-LISP) computers.

  12. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
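    For the unconstrained time-varying problem min_x f(x; t) sampled every h seconds, a generic prediction-correction iteration takes the following textbook form (shown for intuition; the paper's contribution is first-order constrained variants that avoid the explicit Hessian inverse below):

    ```latex
    \hat{x}_{k+1} = x_k - \big[\nabla_{xx} f(x_k; t_k)\big]^{-1}
    \big(\nabla_x f(x_k; t_k) + h\,\nabla_{tx} f(x_k; t_k)\big)
    \quad \text{(prediction)},
    ```

    ```latex
    x_{k+1} = \hat{x}_{k+1} - \gamma\,\nabla_x f\big(\hat{x}_{k+1};\, t_{k+1}\big)
    \quad \text{(correction)},
    ```

    where the prediction extrapolates the optimizer trajectory from the cross-derivative in time and the correction applies ordinary descent once the new cost is observed.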

  13. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    NASA Astrophysics Data System (ADS)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques most prominently used for solving unconstrained optimization problems, owing to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches under given conditions. Theoretically, proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. Moreover, the NRM1 method performs better under the inexact line search than under the exact line search.

  14. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest descent is known as the simplest gradient method. Recently, much research has been done on obtaining an appropriate step size so as to reduce the objective function value progressively. In this paper, the properties of the steepest descent method from the literature are reviewed, together with the advantages and disadvantages of each step-size procedure. The development of the steepest descent method with respect to its step-size procedure is discussed. In order to test the performance of each step size, we run the steepest descent procedure as a C++ program. We implement it on unconstrained optimization test problems with two variables and then compare the numerical results of each step-size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each problem case.
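    As a minimal sketch of the procedure being benchmarked (written in Python rather than the paper's C++, with Armijo backtracking standing in for just one of the many step-size rules compared):

    ```python
    import numpy as np

    def steepest_descent(f, grad, x0, tol=1e-6, max_iter=10_000):
        """Steepest descent with an Armijo backtracking step-size rule."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            d = -g                         # steepest-descent direction
            t, shrink, c = 1.0, 0.5, 1e-4  # initial step, backtrack factor, Armijo constant
            while f(x + t * d) > f(x) + c * t * (g @ d):
                t *= shrink                # backtrack until sufficient decrease holds
            x = x + t * d
        return x

    # a two-variable test problem, as in the paper's setting
    f = lambda x: x[0] ** 2 + 5.0 * x[1] ** 2
    grad = lambda x: np.array([2.0 * x[0], 10.0 * x[1]])
    print(steepest_descent(f, grad, [3.0, -2.0]))  # converges toward [0, 0]
    ```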

  15. Hybrid DFP-CG method for solving unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa

    2017-09-01

    The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient and quasi-Newton methods, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as the Hessian approximation for this new hybrid algorithm. Numerical results show that the new algorithm performs better than the ordinary DFP method and is proven to possess both the sufficient descent and global convergence properties.
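    For reference, the DFP formula named above updates an inverse-Hessian approximation H_k from the step and gradient-change vectors (standard quasi-Newton form, quoted from general theory rather than from this paper):

    ```latex
    H_{k+1} \;=\; H_k \;-\; \frac{H_k\, y_k\, y_k^{\top} H_k}{y_k^{\top} H_k\, y_k}
    \;+\; \frac{s_k\, s_k^{\top}}{y_k^{\top} s_k},
    \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k,
    ```

    and the hybrid method then combines a direction of this quasi-Newton type with a CG direction.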

  16. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem for nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using value iteration-based Q-learning (VIQL) with a critic-only structure. Most of the existing constrained control methods require the use of a certain performance index and suit only linear or affine nonlinear systems, which is unreasonable in practice. To overcome this problem, the system transformation is first introduced with a general performance index. Then, the constrained optimal control problem is converted into an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.

  17. Extremal Optimization for Quadratic Unconstrained Binary Problems

    NASA Astrophysics Data System (ADS)

    Boettcher, S.

    We present an implementation of τ-EO for quadratic unconstrained binary optimization (QUBO) problems. To this end, we transform QUBO from its conventional Boolean presentation into a spin glass with a random external field on each site. These fields tend to be rather large compared to the typical coupling, presenting EO with a challenging two-scale problem: exploring smaller differences in couplings effectively while sufficiently aligning with those strong external fields. However, we also find a simple solution to that problem, which indicates that those external fields apparently tilt the energy landscape to such a degree that global minima become easier to find than those of spin glasses with no (or very small) fields. We explore the impact of the weight distributions of the QUBO formulations in the operations research literature and analyze their meaning in spin-glass language. This is significant because QUBO problems are considered among the main contenders for NP-hard problems that could be solved efficiently on a quantum computer such as the D-Wave machine.
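    The Boolean-to-spin rewriting described above is the standard QUBO-to-Ising change of variables (general knowledge, not specific to this paper): substituting x_i = (1 + s_i)/2 with spins s_i ∈ {-1, +1} gives

    ```latex
    \min_{x \in \{0,1\}^n} x^{\top} Q\, x
    \quad\xrightarrow{\;x_i = (1+s_i)/2\;}\quad
    \min_{s \in \{-1,+1\}^n} \sum_{i<j} J_{ij}\, s_i s_j + \sum_i h_i\, s_i + \text{const},
    ```

    where the couplings J_ij come from the off-diagonal entries of Q and each local field h_i collects row and column sums of Q; this is why generic QUBO instances acquire the comparatively large random fields discussed in the abstract.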

  18. Distributed Learning, Extremum Seeking, and Model-Free Optimization for the Resilient Coordination of Multi-Agent Adversarial Groups

    DTIC Science & Technology

    2016-09-07

    been demonstrated on maximum power point tracking for photovoltaic arrays and for wind turbines. 3. ES has recently been implemented on the Mars...high-dimensional optimization problems. Extensions and applications of these techniques were developed during the realization of the project. 15...studied problems of dynamic average consensus and a class of unconstrained continuous-time optimization algorithms for the coordination of multiple

  19. MO-FG-CAMPUS-TeP2-01: A Graph Form ADMM Algorithm for Constrained Quadratic Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X; Belcher, AH; Wiersma, R

    Purpose: In radiation therapy optimization the constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but do not need to be satisfied exactly. Currently the voxel dose constraints are viewed as soft constraints, included as part of the objective function, and the problem is approximated as an unconstrained one. However, in some treatment planning cases the constraints should be specified as hard constraints and handled by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and to compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators were first constructed as applicable to quadratic IMRT constrained optimization, and the problem was formulated in the graph form of ADMM. A pre-iteration operation for projecting a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by LBFGS, IPOPT, the MATLAB built-in toolbox, CVX (implementing SeDuMi), and Mosek solvers. For unconstrained optimization, it was found that LBFGS performs best, and it was 3-5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8-100 times faster than the other solvers. Conclusion: Graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers, and it also uses significantly less computer memory.
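    For context, the ADMM iteration underlying the graph-form algorithm, in its generic scaled form for min f(x) + g(z) subject to Ax + Bz = c (the textbook statement; the graph-form specialization and the abstract's proximal operators for quadratic dose objectives are particular instances):

    ```latex
    x^{k+1} = \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert A x + B z^{k} - c + u^{k}\rVert_2^2, \\
    z^{k+1} = \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert A x^{k+1} + B z - c + u^{k}\rVert_2^2, \\
    u^{k+1} = u^{k} + A x^{k+1} + B z^{k+1} - c,
    ```

    so hard constraints can be kept exactly (encoded in f or g as indicator functions) while each subproblem stays cheap.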

  20. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. The two methods both possess some good properties: 1) βk ≥ 0; 2) the search direction has the trust-region property without the use of any line search method; 3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
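    The classical Polak-Ribière-Polyak coefficient that both algorithms modify is (standard definition, given for orientation):

    ```latex
    \beta_k^{PRP} = \frac{g_k^{\top}\,(g_k - g_{k-1})}{\lVert g_{k-1}\rVert^{2}},
    \qquad d_k = -g_k + \beta_k^{PRP}\, d_{k-1},
    ```

    and the modifications are designed so that \beta_k \ge 0 and d_k keeps the descent and trust-region properties listed above without any line search.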

  3. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources, called controls, that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of the acoustic source strength is equivalent to minimization in the sense of L1. By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L2 norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L2 minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L2 differ drastically from those obtained in the sense of L1.

  4. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems, which have only linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a globally optimal solution for the original PDECCO problem.

  5. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In problems with unconstrained parameters, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method, based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and with parameters involved on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming problem, followed by a search for the optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.

  6. First-order convex feasibility algorithms for x-ray CT

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates the design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization problem called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solutions of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms, which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT applications. PMID:23464295
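    The base (unaccelerated) Chambolle-Pock iteration for problems of the form min_x F(Kx) + G(x), quoted in its standard form (the paper adapts an accelerated variant to its specific convex feasibility problems):

    ```latex
    y^{n+1} = \operatorname{prox}_{\sigma F^{*}}\!\big(y^{n} + \sigma K \bar{x}^{n}\big), \qquad
    x^{n+1} = \operatorname{prox}_{\tau G}\!\big(x^{n} - \tau K^{\top} y^{n+1}\big), \qquad
    \bar{x}^{n+1} = x^{n+1} + \theta\,(x^{n+1} - x^{n}),
    ```

    with \theta \in [0, 1] (typically 1) and step sizes satisfying \sigma \tau \lVert K\rVert^{2} \le 1.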

  7. Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables

    NASA Astrophysics Data System (ADS)

    Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.

    2018-02-01

    In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The unconstrained optimization problem dual to the regularized problem of finding the projection onto the solution set of the system is also considered. A connection between duality theory and Newton's method and some known algorithms for projecting onto a standard simplex is shown. Using the constraints of the transport linear programming problem as an example, the possibility of increasing the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.

  8. A Comparison of Approaches for Solving Hard Graph-Theoretic Problems

    DTIC Science & Technology

    2015-04-29

    can be converted to a quadratic unconstrained binary optimization (QUBO) problem that uses 0/1-valued variables, and so they are often used...Frontiers in Physics, 2:5 (12 Feb 2014). [7] "Programming with QUBOs," (instructional document) D-Wave: The Quantum Computing Company, 2013. [8

  9. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596

  11. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. The approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed in two steps: first, starting design variables are computed using a least-squares surface-fitting technique; next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.
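    A least-squares surface fit of the kind used for the starting design variables reduces, for assumed basis functions \varphi_j sampled at surface points (u_i, v_i) with target values b_i, to the normal equations (a generic statement, not the report's particular parameterization):

    ```latex
    \min_{c}\;\lVert A c - b\rVert_2^{2},
    \qquad A_{ij} = \varphi_j(u_i, v_i),
    \qquad A^{\top} A\, c = A^{\top} b,
    ```

    whose solution c supplies the initial jig-shape design variables before the numerical fine-tuning step.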

  12. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory that allows bounded-rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-SAT constraint satisfaction problem and for unconstrained minimization of NK functions.

  13. Number-unconstrained quantum sensing

    NASA Astrophysics Data System (ADS)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  14. Computational alternatives to obtain time optimal jet engine control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Basso, R. J.; Leake, R. J.

    1976-01-01

    Two computational methods are described for determining an open-loop time-optimal control sequence for a simple single-spool turbojet engine modeled by a set of nonlinear differential equations. Both methods are modifications of widely accepted algorithms which can solve fixed-time unconstrained optimal control problems with a free right end. The constrained problems considered here have fixed right ends and free time. Dynamic programming is defined on a standard problem, and it yields a successive-approximation solution to the time-optimal problem of interest. A feedback control law is obtained, and it is then used to determine the corresponding open-loop control sequence. The Fletcher-Reeves conjugate gradient method has been selected for adaptation to solve a nonlinear optimal control problem with state-variable and control constraints.

  15. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems, unconstrained or constrained, uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that: it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor, both for quality (error) and for cost (complexity), in producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment. It also includes information that can be generated through any means, whether deterministic, nondeterministic, or graphical. Instead of attempting a solution of the problem straightaway through a GA without having or using knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  16. The use of optimization techniques to design controlled diffusion compressor blading

    NASA Technical Reports Server (NTRS)

    Sanger, N. L.

    1982-01-01

    A method for automating compressor blade design using numerical optimization, applied to the design of a controlled-diffusion stator blade row, is presented. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of the design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.

  17. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  18. A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Ortiz, Francisco

    2004-01-01

    COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms, among them the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP), and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation, and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions, as formalized below. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach, which is then compared with the existing penalty functions.
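    A typical static penalty of the kind such studies compare augments the raw objective with a term that grows with constraint violation, so infeasible GA candidates receive degraded fitness (one generic form, not any specific function examined in this study):

    ```latex
    F(\mathbf{x}) = f(\mathbf{x}) + r \sum_{j=1}^{m} \big[\max\{0,\; g_j(\mathbf{x})\}\big]^{2},
    \qquad g_j(\mathbf{x}) \le 0 \ \text{feasible}, \quad r > 0 .
    ```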

  19. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    NASA Astrophysics Data System (ADS)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region near the current iterate. This is the key to guaranteeing the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.

  20. A new smoothing modified three-term conjugate gradient method for ℓ1-norm minimization problem.

    PubMed

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a kind of nonsmooth optimization problem with ℓ1-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak method has good numerical properties, the proposed method possesses the sufficient descent property without any line search and is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
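    A common smoothing approximation of the kind referenced (a representative example; the paper's particular smoothing function may differ) replaces each nonsmooth absolute-value term with a parameterized smooth surrogate:

    ```latex
    \lvert t\rvert \;\approx\; \phi_{\mu}(t) = \sqrt{t^{2} + \mu^{2}},
    \qquad 0 \le \phi_{\mu}(t) - \lvert t\rvert \le \mu,
    ```

    so that \lVert x\rVert_1 \approx \sum_i \phi_{\mu}(x_i) is differentiable and standard unconstrained CG machinery applies as \mu \downarrow 0.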

  1. An improved marriage in honey bees optimization algorithm for single objective unconstrained optimization.

    PubMed

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees and is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) by adding the Levy flight algorithm for the queen mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.

  2. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium, and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and its alleviation can improve the efficiency of the optimizers.

  3. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium, and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and its alleviation can improve the efficiency of optimizers.

  4. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is exploratory work on a potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves large numbers of variables and large volumes of data. The new quantum computers open up a very different approach, both in conceptual programming and in hardware architecture, to solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem. We find a transformation of the problem that maps it into the quadratic unconstrained binary optimization (QUBO) framework. A binary wavelet transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in the BWT are performed by Boolean operations. The transformed problem will then be solved experimentally as QUBO instances defined on Chimera graphs of the quantum computer.

  5. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.

  7. A computational algorithm for spacecraft control and momentum management

    NASA Technical Reports Server (NTRS)

    Dzielski, John; Bergmann, Edward; Paradiso, Joseph

    1990-01-01

    Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.

  8. An Optimization Framework for Dynamic Hybrid Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis

    A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problemmore » takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.« less

  9. Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Contraints

    NASA Technical Reports Server (NTRS)

    Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren

    2015-01-01

    Interplanetary missions are often subject to difficult constraints, like solar phase angle upon arrival at the destination, velocity at arrival, and altitudes for flybys. Preliminary design of such missions is often conducted by solving the unconstrained problem and then filtering away solutions which do not naturally satisfy the constraints. However this can bias the search into non-advantageous regions of the solution space, so it can be better to conduct preliminary design with the full set of constraints imposed. In this work two stochastic global search methods are developed which are well suited to the constrained global interplanetary trajectory optimization problem.

  10. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system is tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found feasible and leads to a very substantial improvement in the complexity of optimization problems which can be efficiently handled.

  11. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.

  12. Metabolic flux estimation using particle swarm optimization with penalty function.

    PubMed

    Long, Hai-Xia; Xu, Wen-Bo; Sun, Jun

    2009-01-01

    Metabolic flux estimation through 13C trace experiment is crucial for quantifying the intracellular metabolic fluxes. In fact, it corresponds to a constrained optimization problem that minimizes a weighted distance between measured and simulated results. In this paper, we propose particle swarm optimization (PSO) with penalty function to solve 13C-based metabolic flux estimation problem. The stoichiometric constraints are transformed to an unconstrained one, by penalizing the constraints and building a single objective function, which in turn is minimized using PSO algorithm for flux quantification. The proposed algorithm is applied to estimate the central metabolic fluxes of Corynebacterium glutamicum. From simulation results, it is shown that the proposed algorithm has superior performance and fast convergence ability when compared to other existing algorithms.

  13. A general-purpose optimization program for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Sugimoto, H.

    1986-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.

  14. Guided particle swarm optimization method to solve general nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr

    2018-04-01

    The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.

  15. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.

  16. Analytical investigations in aircraft and spacecraft trajectory optimization and optimal guidance

    NASA Technical Reports Server (NTRS)

    Markopoulos, Nikos; Calise, Anthony J.

    1995-01-01

    A collection of analytical studies is presented related to unconstrained and constrained aircraft (a/c) energy-state modeling and to spacecraft (s/c) motion under continuous thrust. With regard to a/c unconstrained energy-state modeling, the physical origin of the singular perturbation parameter that accounts for the observed 2-time-scale behavior of a/c during energy climbs is identified and explained. With regard to the constrained energy-state modeling, optimal control problems are studied involving active state-variable inequality constraints. Departing from the practical deficiencies of the control programs for such problems that result from the traditional formulations, a complete reformulation is proposed for these problems which, in contrast to the old formulation, will presumably lead to practically useful controllers that can track an inequality constraint boundary asymptotically, and even in the presence of 2-sided perturbations about it. Finally, with regard to s/c motion under continuous thrust, a thrust program is proposed for which the equations of 2-dimensional motion of a space vehicle in orbit, viewed as a point mass, afford an exact analytic solution. The thrust program arises under the assumption of tangential thrust from the costate system corresponding to minimum-fuel, power-limited, coplanar transfers between two arbitrary conics. The thrust program can be used not only with power-limited propulsion systems, but also with any propulsion system capable of generating continuous thrust of controllable magnitude, and, for propulsion types and classes of transfers for which it is sufficiently optimal the results of this report suggest a method of maneuvering during planetocentric or heliocentric orbital operations, requiring a minimum amount of computation; thus uniquely suitable for real-time feedback guidance implementations.

  17. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generation of genetic algorithms, required for searching optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated by using two example problems. The evaluation shows that the model is superior, simple in concept and also has the potential for field application.

  18. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induces a mixture distribution over the optimization variables. Computer experiment illustrate this for &sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  19. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important assignment in the UAV mission planning. Based on the artificial potential field (APF) UAV path planning method, it is reconstructed into the constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into the unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reform this problem into an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. The path following process based on the six degrees of freedom simulation model of the quadrotor helicopters is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in planning path. In the planning space, the length of the calculated path is shorter and smoother than that using traditional APF method. In addition, the improved method can solve the dead point problem effectively.

  20. Optimal mistuning for enhanced aeroelastic stability of transonic fans

    NASA Technical Reports Server (NTRS)

    Hall, K. C.; Crawley, E. F.

    1983-01-01

    An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each of the blades were modelled with a single tosional degree of freedom.

  1. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear constrained and unconstrained function minimization problems is presented. The algorithm is the sequence of unconstrained minimizations using the Newton's method for unconstrained function minimizations. The use of NEWSUMT and the definition of all parameters are described.

  2. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    NASA Astrophysics Data System (ADS)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The field of nature inspired computing and optimization techniques have evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In Firefly Algorithm (FA) sorting of fireflies is done by using sorting algorithm. The original FA is proposed with bubble sort for ranking the fireflies. In this paper, the quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used is unconstrained benchmark functions from CEC 2005 [22]. The comparison of FA using bubble sort and FA using quick sort is performed with respect to best, worst, mean, standard deviation, number of comparisons and execution time. The experimental result shows that FA using quick sort requires less number of comparisons but requires more execution time. The increased number of fireflies helps to converge into optimal solution whereas by varying dimension for algorithm performed better at a lower dimension than higher dimension.

  3. Graph cuts via l1 norm minimization.

    PubMed

    Bhusnurmath, Arvind; Taylor, Camillo J

    2008-10-01

    Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.

  4. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.

  5. A quantum annealing approach for fault detection and diagnosis of graph-based systems

    NASA Astrophysics Data System (ADS)

    Perdomo-Ortiz, A.; Fluegemann, J.; Narasimhan, S.; Biswas, R.; Smelyanskiy, V. N.

    2015-02-01

    Diagnosing the minimal set of faults capable of explaining a set of given observations, e.g., from sensor readouts, is a hard combinatorial optimization problem usually tackled with artificial intelligence techniques. We present the mapping of this combinatorial problem to quadratic unconstrained binary optimization (QUBO), and the experimental results of instances embedded onto a quantum annealing device with 509 quantum bits. Besides being the first time a quantum approach has been proposed for problems in the advanced diagnostics community, to the best of our knowledge this work is also the first research utilizing the route Problem → QUBO → Direct embedding into quantum hardware, where we are able to implement and tackle problem instances with sizes that go beyond previously reported toy-model proof-of-principle quantum annealing implementations; this is a significant leap in the solution of problems via direct-embedding adiabatic quantum optimization. We discuss some of the programmability challenges in the current generation of the quantum device as well as a few possible ways to extend this work to more complex arbitrary network graphs.

  6. Bayesian Optimization Under Mixed Constraints with A Slack-Variable Augmented Lagrangian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picheny, Victor; Gramacy, Robert B.; Wild, Stefan M.

    An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers. Recently, surrogate-based Bayesian optimization (BO) sub-solvers have been successfully deployed in the AL framework for a more global search in the presence of inequality constraints; however, a drawback was that expected improvement (EI) evaluations relied on Monte Carlo. Here we introduce an alternative slack variable AL, and show that in this formulation the EI may be evaluated with library routines. The slack variables furthermore facilitate equality as well as inequality constraints, and mixtures thereof.more » We show our new slack “ALBO” compares favorably to the original. Its superiority over conventional alternatives is reinforced on several mixed constraint examples.« less

  7. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs [linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.

  8. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  9. A three-term conjugate gradient method under the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa

    2017-08-01

    Recently, numerous studies have been concerned in conjugate gradient methods for solving large-scale unconstrained optimization method. In this paper, a three-term conjugate gradient method is proposed for unconstrained optimization which always satisfies sufficient descent direction and namely as Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL). Under standard conditions, TTRMIL method is proved to be globally convergent under strong-Wolfe line search. Finally, numerical results are provided for the purpose of comparison.

  10. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    Conjugate gradient (CG) method is an important technique in unconstrained optimization, due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large scale unconstrained optimization. Theoretical proofs show that the new method fulfills sufficient descent condition if strong Wolfe-Powell inexact line search is used. Besides, computational results show that our proposed method outperforms to other existing CG methods.

  11. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

    Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. Approximation to gradients of the behavior constraints that can be generated with small amount of numerical calculations is proposed. Structural optimization with these approximate sensitivities produced correct optimum solution. Approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, method of feasible directions, sequence of quadratic programming, and sequence of linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce intensive computation that has been associated with traditional structural optimization.

  12. QSPIN: A High Level Java API for Quantum Computing Experimentation

    NASA Technical Reports Server (NTRS)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling are provided to demonstrate current capabilities.

  13. Boosting quantum annealer performance via sample persistence

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili

    2017-07-01

    We propose a novel method for reducing the number of variables in quadratic unconstrained binary optimization problems, using a quantum annealer (or any sampler) to fix the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are usually much easier for the quantum annealer to solve, due to their being smaller and consisting of disconnected components. This approach significantly increases the success rate and number of observations of the best known energy value in samples obtained from the quantum annealer, when compared with calling the quantum annealer without using it, even when using fewer annealing cycles. Use of the method results in a considerable improvement in success metrics even for problems with high-precision couplers and biases, which are more challenging for the quantum annealer to solve. The results are further enhanced by applying the method iteratively and combining it with classical pre-processing. We present results for both Chimera graph-structured problems and embedded problems from a real-world application.

  14. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.

  15. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worstcase functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.

  16. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worstcase functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  17. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property in kernel methods for Support Vector Machine (SVM) by which efficient solutions can be guaranteed through convex quadratic programming. However, a lot of similarity functions in applications do not produce positive semi-definite kernels. We propose projection method by constructing projection matrix on indefinite kernels. As a generalization of the spectrum method (denoising method and flipping method), the projection method shows better or comparable performance comparing to the corresponding indefinite kernel methods on a number of real world data sets. Under the Bregman matrix divergence theory, we can find suggested optimal λ in projection method using unconstrained optimization in kernel learning. In this paper we focus on optimal λ determination, in the pursuit of precise optimal λ determination method in unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships. We compared optimal λ determination with Logdet Divergence and perturbed von-Neumann Divergence, aiming at finding better λ in projection method. Results on a number of real world data sets show that projection method with optimal λ by Logdet divergence demonstrate near optimal performance. And the perturbed von-Neumann Divergence can help determine a relatively better optimal projection method. Projection method ia easy to use for dealing with indefinite kernels. And the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way in kernel SVMs for varied objectives.

  18. Differential geometric treewidth estimation in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Jonckheere, Edmond; Brun, Todd

    2016-10-01

    The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture. The latter problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth based on the differential geometric concept of Ollivier-Ricci curvature. The latter runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.

  19. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Constrained and Unconstrained Variational Finite Element Formulation of Solutions to a Stress Wave Problem - a Numerical Comparison.

    DTIC Science & Technology

    1982-10-01

    Element Unconstrained Variational Formulations," Innovativ’e Numerical Analysis For the Applied Engineering Science, R. P. Shaw, et at, Fitor...Initial Boundary Value of Gun Dynamics Solved by Finite Element Unconstrained Variational Formulations," Innovative Numerical Analysis For the Applied ... Engineering Science, R. P. Shaw, et al, Editors, University Press of Virginia, Charlottesville, pp. 733-741, 1980. 2 J. J. Wu, "Solutions to Initial

  1. Adaptive nearly optimal control for a class of continuous-time nonaffine nonlinear systems with inequality constraints.

    PubMed

    Fan, Quan-Yong; Yang, Guang-Hong

    2017-01-01

    The state inequality constraints have been hardly considered in the literature on solving the nonlinear optimal control problem based the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without the requirement for the probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Genetic algorithm-based multi-objective optimal absorber system for three-dimensional seismic structures

    NASA Astrophysics Data System (ADS)

    Ren, Wenjie; Li, Hongnan; Song, Gangbing; Huo, Linsheng

    2009-03-01

    The problem of optimizing an absorber system for three-dimensional seismic structures is addressed. The objective is to determine the number and position of absorbers to minimize the coupling effects of translation-torsion of structures at minimum cost. A procedure for a multi-objective optimization problem is developed by integrating a dominance-based selection operator and a dominance-based penalty function method. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. The technique guarantees the better performing individual winning its competition, provides a slight selection pressure toward individuals and maintains diversity in the population. Moreover, due to the evaluation for individuals in each generation being finished in one run, less computational effort is taken. Penalty function methods are generally used to transform a constrained optimization problem into an unconstrained one. The dominance-based penalty function contains necessary information on non-dominated character and infeasible position of an individual, essential for success in seeking a Pareto optimal set. The proposed approach is used to obtain a set of non-dominated designs for a six-storey three-dimensional building with shape memory alloy dampers subjected to earthquake.

  3. A case study in programming a quantum annealer for hard operational planning problems

    NASA Astrophysics Data System (ADS)

    Rieffel, Eleanor G.; Venturelli, Davide; O'Gorman, Bryan; Do, Minh B.; Prystay, Elicia M.; Smelyanskiy, Vadim N.

    2015-01-01

    We report on a case study in programming an early quantum annealer to attack optimization problems related to operational planning. While a number of studies have looked at the performance of quantum annealers on problems native to their architecture, and others have examined performance of select problems stemming from an application area, ours is one of the first studies of a quantum annealer's performance on parametrized families of hard problems from a practical domain. We explore two different general mappings of planning problems to quadratic unconstrained binary optimization (QUBO) problems, and apply them to two parametrized families of planning problems, navigation-type and scheduling-type. We also examine two more compact, but problem-type specific, mappings to QUBO, one for the navigation-type planning problems and one for the scheduling-type planning problems. We study embedding properties and parameter setting and examine their effect on the efficiency with which the quantum annealer solves these problems. From these results, we derive insights useful for the programming and design of future quantum annealers: problem choice, the mapping used, the properties of the embedding, and the annealing profile all matter, each significantly affecting the performance.

  4. Quantum-enhanced reinforcement learning for finite-episode games with discrete state spaces

    NASA Astrophysics Data System (ADS)

    Neukart, Florian; Von Dollen, David; Seidel, Christian; Compostella, Gabriele

    2017-12-01

    Quantum annealing algorithms belong to the class of metaheuristic tools, applicable for solving binary optimization problems. Hardware implementations of quantum annealing, such as the quantum annealing machines produced by D-Wave Systems, have been subject to multiple analyses in research, with the aim of characterizing the technology's usefulness for optimization and sampling tasks. Here, we present a way to partially embed both Monte Carlo policy iteration for finding an optimal policy on random observations, as well as how to embed n sub-optimal state-value functions for approximating an improved state-value function given a policy for finite horizon games with discrete state spaces on a D-Wave 2000Q quantum processing unit (QPU). We explain how both problems can be expressed as a quadratic unconstrained binary optimization (QUBO) problem, and show that quantum-enhanced Monte Carlo policy evaluation allows for finding equivalent or better state-value functions for a given policy with the same number episodes compared to a purely classical Monte Carlo algorithm. Additionally, we describe a quantum-classical policy learning algorithm. Our first and foremost aim is to explain how to represent and solve parts of these problems with the help of the QPU, and not to prove supremacy over every existing classical policy evaluation algorithm.

  5. VDLLA: A virtual daddy-long legs optimization

    NASA Astrophysics Data System (ADS)

    Yaakub, Abdul Razak; Ghathwan, Khalil I.

    2016-08-01

    Swarm intelligence is a strong optimization algorithm based on a biological behavior of insects or animals. The success of any optimization algorithm is depending on the balance between exploration and exploitation. In this paper, we present a new swarm intelligence algorithm, which is based on daddy long legs spider (VDLLA) as a new optimization algorithm with virtual behavior. In VDLLA, each agent (spider) has nine positions which represent the legs of spider and each position represent one solution. The proposed VDLLA is tested on four standard functions using average fitness, Medium fitness and standard deviation. The results of proposed VDLLA have been compared against Particle Swarm Optimization (PSO), Differential Evolution (DE) and Bat Inspired Algorithm (BA). Additionally, the T-Test has been conducted to show the significant deference between our proposed and other algorithms. VDLLA showed very promising results on benchmark test functions for unconstrained optimization problems and also significantly improved the original swarm algorithms.

  6. A "Reverse-Schur" Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts-in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.

  7. A “Reverse-Schur” Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule’s electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts–in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method. PMID:23055839

  8. Development and comparison of advanced reduced-basis methods for the transient structural analysis of unconstrained structures

    NASA Technical Reports Server (NTRS)

    Mcgowan, David M.; Bostic, Susan W.; Camarda, Charles J.

    1993-01-01

    The development of two advanced reduced-basis methods, the force derivative method and the Lanczos method, and two widely used modal methods, the mode displacement method and the mode acceleration method, for transient structural analysis of unconstrained structures is presented. Two example structural problems are studied: an undamped, unconstrained beam subject to a uniformly distributed load which varies as a sinusoidal function of time and an undamped high-speed civil transport aircraft subject to a normal wing tip load which varies as a sinusoidal function of time. These example problems are used to verify the methods and to compare the relative effectiveness of each of the four reduced-basis methods for performing transient structural analyses on unconstrained structures. The methods are verified with a solution obtained by integrating directly the full system of equations of motion, and they are compared using the number of basis vectors required to obtain a desired level of accuracy and the associated computational times as comparison criteria.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and do not require the computation of its inverse). Analytical results are establishedmore » to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase performance and benefits of the algorithms.« less

  10. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using the two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, rigid pitch shape, rigid left and right stabilator rotation shapes, and a residual shape are selected as sixteen basis functions. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error of 0.9844 inches of the starting configuration becomes 0.00367 inch by the end of the third optimization run.

  11. Optimally stopped variational quantum algorithms

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for some quadratic unconstrained binary optimization. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA algorithm and even improve its scaling properties.

  12. Conjugate gradient determination of optimal plane changes for a class of three-impulse transfers between noncoplanar circular orbits

    NASA Technical Reports Server (NTRS)

    Burrows, R. R.

    1972-01-01

    A particular type of three-impulse transfer between two circular orbits is analyzed. The possibility of three plane changes is recognized, and the problem is to distribute these plane changes optimally so as to minimize the sum of the individual impulses. Numerical difficulties and their solution are discussed. Numerical results obtained from a conjugate gradient technique are presented both for the case where the individual plane changes are unconstrained and for the case where they are constrained. Not unexpectedly, multiple minima are found. The techniques presented could be extended to the finite-burn case, but the contents are addressed primarily to preliminary mission design and vehicle sizing.

  13. A cubic extended interior penalty function for structural optimization

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Haftka, R. T.

    1979-01-01

    This paper describes an optimization procedure for the minimum weight design of complex structures. The procedure is based on a new cubic extended interior penalty function (CEIPF) used with the sequential unconstrained minimization technique (SUMT) and Newton's method. The Hessian matrix of the penalty function is approximated using only constraints and their derivatives. The CEIPF is designed to minimize the error in the approximation of the Hessian matrix, and as a result the number of structural analyses required is small and independent of the number of design variables. Three example problems are reported. The number of structural analyses is reduced by as much as 50 per cent below previously reported results.
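
    The construction below is a hedged sketch of the idea, not the paper's exact CEIPF: for constraints of the form g(x) >= 0 it uses the interior penalty 1/g above a transition value g0 and, below g0, the third-order Taylor expansion of 1/g about g0, which keeps the penalty and its first two derivatives continuous. The toy SUMT loop and objective are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def cubic_extended_penalty(g, g0=0.1):
    """Interior penalty 1/g for g >= g0; below g0, the cubic Taylor
    expansion of 1/g about g0 (a simple stand-in for the paper's CEIPF)."""
    if g >= g0:
        return 1.0 / g
    d = g - g0
    return 1.0 / g0 - d / g0**2 + d**2 / g0**3 - d**3 / g0**4

f = lambda x: x[0] ** 2            # objective
g = lambda x: x[0] - 1.0           # feasible when g >= 0; optimum at x = 1

x = np.array([3.0])
for r in [1.0, 0.1, 0.01, 0.001]:  # SUMT: shrink the penalty multiplier
    x = minimize(lambda x: f(x) + r * cubic_extended_penalty(g(x)), x).x
print("approximate constrained minimum:", x)
```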

  14. An optimal output feedback gain variation scheme for the control of plants exhibiting gross parameter changes

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.

    1987-01-01

    A concept for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes was developed. The approach was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include the derivation of necessary conditions for optimality for the general problem formulation, and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories which are arbitrarily close to the unconstrained gain solution for each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation. Many aerospace systems fall into this category.

  15. Trajectory optimization and guidance law development for national aerospace plane applications

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1988-01-01

    The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic-pressure-constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic-pressure-constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic-pressure-constrained case.

  16. A sequential solution for anisotropic total variation image denoising with interval constraints

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Noo, Frédéric

    2017-09-01

    We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficients in the patient body. Our results are simple yet appear to be previously unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
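
    The two-stage structure is easy to demonstrate. The sketch below uses scikit-image's Chambolle solver as a stand-in unconstrained TV denoiser (the paper's analysis concerns anisotropic TV with an exact solver), followed by clipping to enforce a non-negativity constraint of the kind arising in CT:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle  # stand-in TV solver

rng = np.random.default_rng(2)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# Stage 1: solve the unconstrained TV denoising problem.
u = denoise_tv_chambolle(noisy, weight=0.2)

# Stage 2: threshold (clip) into the interval to satisfy the constraint,
# here non-negativity as for CT attenuation coefficients.
u_constrained = np.clip(u, 0.0, None)
print("min before/after:", u.min(), u_constrained.min())
```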

  17. Application of augmented-Lagrangian methods in meteorology: Comparison of different conjugate-gradient codes for large-scale minimization

    NASA Technical Reports Server (NTRS)

    Navon, I. M.

    1984-01-01

    A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.

  18. Gaussian Mean Field Lattice Gas

    NASA Astrophysics Data System (ADS)

    Scoppola, Benedetto; Troiani, Alessio

    2018-03-01

    We study rigorously a lattice gas version of the Sherrington-Kirkpatrick spin glass model. In the discrete optimization literature this problem is known as unconstrained binary quadratic programming and is NP-hard. We prove that the fluctuations of the ground state energy tend to vanish in the thermodynamic limit, and we give a lower bound on this ground state energy. We then present a heuristic algorithm, based on a probabilistic cellular automaton, which appears able to find configurations with energy very close to the minimum, even for quite large instances.
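
    A minimal illustration of heuristic UBQP minimization, with greedy single-bit flips standing in for the paper's probabilistic cellular automaton and a random SK-style coupling matrix as the instance:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
J = rng.standard_normal((n, n)) / np.sqrt(n)   # SK-style random couplings
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

x = rng.integers(0, 2, n).astype(float)        # occupation variables in {0, 1}
improved = True
while improved:                                # greedy single-flip descent
    improved = False
    for i in range(n):
        d = 1.0 - 2.0 * x[i]                   # +1 flips 0->1, -1 flips 1->0
        if 2.0 * d * (J[i] @ x) < 0:           # energy change (J_ii = 0)
            x[i] += d
            improved = True
print("heuristic ground-state energy:", float(x @ J @ x))
```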

  19. Adiabatic Quantum Computation with Neutral Cesium

    NASA Astrophysics Data System (ADS)

    Hankin, Aaron; Parazzoli, L.; Chou, Chin-Wen; Jau, Yuan-Yu; Burns, George; Young, Amber; Kemme, Shanalyn; Ferdinand, Andrew; Biedermann, Grant; Landahl, Andrew; Ivan H. Deutsch Collaboration; Mark Saffman Collaboration

    2013-05-01

    We are implementing a new platform for adiabatic quantum computation (AQC) based on trapped neutral atoms whose coupling is mediated by the dipole-dipole interactions of Rydberg states. Ground state cesium atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism, thereby providing the requisite entangling interactions. As a benchmark we study a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. University of New Mexico: Ivan H. Deutsch, Tyler Keating, Krittika Goyal.

  20. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on the expectation of a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
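
    The dualization idea can be seen in a one-stage toy problem: candidate actions carry a cost and a failure probability, the chance constraint P(fail) <= Delta is priced into the objective by a multiplier, and the multiplier is found by bisection. All numbers below are invented for illustration.

```python
# One-stage toy: each action has (cost, failure probability); enforce
# P(failure) <= Delta by pricing risk into the objective with a Lagrange
# multiplier and bisecting on it.
actions = [(10.0, 0.001), (6.0, 0.02), (3.0, 0.10), (1.0, 0.40)]
Delta = 0.05                          # user-specified risk bound

def best_action(lam):                 # unconstrained: cost + lam * P(fail)
    return min(actions, key=lambda a: a[0] + lam * a[1])

lo, hi = 0.0, 1e4                     # bisection on the multiplier
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if best_action(mid)[1] > Delta:
        lo = mid                      # still too risky: raise the price
    else:
        hi = mid
print("chosen (cost, p_fail):", best_action(hi))
```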

  1. ADS: A FORTRAN program for automated design synthesis: Version 1.10

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1985-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include, for example, variable metric methods and the Method of Feasible Directions; one-dimensional search options include polynomial interpolation and the Golden Section method. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to override these, if desired.

  2. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
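
    A sketch of the resulting iteration on a toy instance (minimize ||x - b||^2 over the intersection of a box and a half-space): each sweep projects onto the individual sets and then minimizes the quadratic surrogate in closed form. The fixed penalty mu makes this a penalty approximation; the paper's algorithm increases the penalty and adds quasi-Newton acceleration.

```python
import numpy as np

b = np.array([3.0, -2.0])

def proj_box(x):                       # projection onto the box [-1, 1]^2
    return np.clip(x, -1.0, 1.0)

def proj_halfspace(x, a=np.array([1.0, 1.0]), c=1.0):
    viol = a @ x - c                   # projection onto {x : a.x <= c}
    return x if viol <= 0 else x - viol * a / (a @ a)

projections = [proj_box, proj_halfspace]
mu = 10.0                              # fixed penalty strength (illustrative)

x = b.copy()
for _ in range(200):
    # Majorize dist(x, C_i)^2 by ||x - P_i(x_k)||^2; for f(x) = ||x - b||^2
    # the surrogate minimizer is a weighted average in closed form.
    anchors = [P(x) for P in projections]
    x = (b + 0.5 * mu * sum(anchors)) / (1 + 0.5 * mu * len(anchors))
print(x)   # a penalty approximation of the constrained optimum (1, -1)
```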

  3. Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David

    2016-01-01

    Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.

  4. Newton's method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, J. J.; Sorensen, D. C.

    1982-02-01

    Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
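
    The line search approach mentioned above is easily sketched: take the Newton direction when it is a descent direction, otherwise fall back to steepest descent, and backtrack until an Armijo sufficient-decrease test passes. The Rosenbrock test function is used purely as a stock example.

```python
import numpy as np

def newton_line_search(f, grad, hess, x0, tol=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)   # Newton direction
        if g @ p >= 0:                     # Hessian not positive definite:
            p = -g                         # fall back to steepest descent
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5                       # Armijo backtracking
        x = x + t * p
    return x

# Stock test problem: the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(newton_line_search(f, grad, hess, [-1.2, 1.0]))   # -> [1, 1]
```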

  5. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure has been developed which couples formal multiobjective techniques and complex analysis procedures, such as computational fluid dynamics (CFD) codes. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective problem into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure gives the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
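
    The K-S aggregation itself is compact enough to show directly. The sketch below combines two invented quadratic objectives into one smooth envelope function and minimizes it with BFGS; the real application substitutes CFD-derived aerodynamic and sonic-boom measures.

```python
import numpy as np
from scipy.optimize import minimize

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative maximum
    of the component functions (shifted for numerical stability)."""
    m = np.max(values)
    return m + np.log(np.sum(np.exp(rho * (values - m)))) / rho

def composite(x, w=(1.0, 1.0)):
    f1 = (x[0] - 1) ** 2 + x[1] ** 2       # invented objective 1
    f2 = x[0] ** 2 + (x[1] + 2) ** 2       # invented objective 2
    return ks(np.array([w[0] * f1, w[1] * f2]))

res = minimize(composite, x0=[0.0, 0.0], method="BFGS")
print(res.x, res.fun)   # a compromise between the two objectives
```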

  6. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

    PubMed

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, a novel elitist breeding strategy acts on the elitists of the swarm to escape likely local optima and guide the swarm toward a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new, diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected as the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions, with better global search capability and a faster convergence rate.
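
    For orientation, the sketch below implements the plain quantum-behaved PSO update (mean-best attractor plus a logarithmic jump) on a sphere function; the elitist breeding and transposon operators that distinguish EB-QPSO are deliberately omitted.

```python
import numpy as np

def qpso_minimize(f, dim, n_particles=30, iters=200, alpha=0.75, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                    # mean best position
        phi = rng.uniform(size=(n_particles, dim))
        p = phi * pbest + (1 - phi) * gbest           # local attractors
        u = rng.uniform(1e-12, 1.0, (n_particles, dim))
        sign = rng.choice([-1.0, 1.0], (n_particles, dim))
        x = p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
        val = np.apply_along_axis(f, 1, x)
        mask = val < pval
        pbest[mask], pval[mask] = x[mask], val[mask]
        gbest = pbest[np.argmin(pval)].copy()
    return gbest, float(pval.min())

print(qpso_minimize(lambda z: float(np.sum(z ** 2)), dim=10))
```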

  7. Transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Hiday-Johnston, L. A.; Howell, K. C.

    1994-04-01

    A strategy is formulated to design optimal time-fixed impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The adjoint equation in terms of rotating coordinates in the elliptic restricted three-body problem is shown to be of a distinctly different form from that obtained in the analysis of trajectories in the two-body problem. Also, the necessary conditions for a time-fixed two-impulse transfer to be optimal are stated in terms of the primer vector. Primer vector theory is then extended to nonoptimal impulsive trajectories in order to establish a criterion whereby the addition of an interior impulse reduces total fuel expenditure. The necessary conditions for the local optimality of a transfer containing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. Determination of location, orientation, and magnitude of each additional impulse is accomplished by the unconstrained minimization of the cost function using a multivariable search method. Results indicate that substantial savings in fuel can be achieved by the addition of interior impulsive maneuvers on transfers between libration-point orbits.

  8. An efficient and practical approach to obtain a better optimum solution for structural optimization

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Yu; Huang, Jyun-Hao

    2013-08-01

    For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research to obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods. It is found in this research that data mining can accomplish this goal easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions. For constrained optimization problems, it is used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP sometimes are even better than the solution found using a hybrid global search method with approximate structural analyses.

  9. Text-line extraction in handwritten Chinese documents based on an energy minimization framework.

    PubMed

    Koo, Hyung Il; Cho, Nam Ik

    2012-03-01

    Text-line extraction in unconstrained handwritten documents remains a challenging problem due to nonuniform character scale, spatially varying text orientation, and the interference between text lines. In order to address these problems, we propose a new cost function that considers the interactions between text lines and the curvilinearity of each text line. Precisely, we achieve this goal by introducing normalized measures for them, which are based on an estimated line spacing. We also present an optimization method that exploits the properties of our cost function. Experimental results on a database consisting of 853 handwritten Chinese document images have shown that our method achieves a detection rate of 99.52% and an error rate of 0.32%, which outperforms conventional methods.

  10. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2013-01-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
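
    The alternation the authors describe, an unconstrained gradient step followed by a Euclidean projection, is the classic projected-gradient pattern, sketched here on a toy problem whose feasible set is the unit ball:

```python
import numpy as np

def projected_gradient(grad, proj, x0, step=0.1, iters=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = proj(x - step * grad(x))   # gradient step, then projection
    return x

# Toy instance: minimize ||x - b||^2 subject to ||x|| <= 1.
b = np.array([2.0, 1.0])
grad = lambda x: 2 * (x - b)
proj = lambda x: x if np.linalg.norm(x) <= 1 else x / np.linalg.norm(x)
print(projected_gradient(grad, proj, np.zeros(2)))   # ~ b / ||b||
```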

  11. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
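
    A modern stand-in for what such a program does: choose model parameters to minimize the sum of squared residuals with a BFGS engine. The exponential model and data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Invented data: fit y = a * exp(b * t) to noisy samples by minimizing the
# sum of squared residuals with a BFGS engine.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 2.0, 40)
y = 2.5 * np.exp(-1.3 * t) + 0.05 * rng.standard_normal(t.size)

def sse(params):
    a, b = params
    return float(np.sum((y - a * np.exp(b * t)) ** 2))

res = minimize(sse, x0=[1.0, -1.0], method="BFGS")
print("fitted (a, b):", res.x)   # close to (2.5, -1.3)
```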

  12. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearance and the limited number of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are introduced to avoid overfitting. In order to deal with the single-image-per-person problem, we exploit face variations learned from training sets to synthesize virtual samples for the gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approaches and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method clearly outperforms state-of-the-art methods.

  13. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, since the unconstrained Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.

  14. Dynamic optimization and its relation to classical and quantum constrained systems

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo

    2017-08-01

    We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it in a correct way. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x,t) = e^{iS(x,t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the S function. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation when S(x,t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation in Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.

  15. Optimal lifting ascent trajectories for the space shuttle

    NASA Technical Reports Server (NTRS)

    Rau, T. R.; Elliott, J. R.

    1972-01-01

    The performance gains which are possible through the use of optimal trajectories for a particular space shuttle configuration are discussed. The spacecraft configurations and aerodynamic characteristics are described. Shuttle mission payload capability is examined with respect to the optimal orbit inclination for unconstrained, constrained, and nonlifting conditions. The effects of velocity loss and heating rate on the optimal ascent trajectory are investigated.

  16. Time-optimal thermalization of single-mode Gaussian states

    NASA Astrophysics Data System (ADS)

    Carlini, Alberto; Mari, Andrea; Giovannetti, Vittorio

    2014-11-01

    We consider the problem of time-optimal control of a continuous bosonic quantum system subject to the action of a Markovian dissipation. In particular, we consider the case of a one-mode Gaussian quantum system prepared in an arbitrary initial state and which relaxes to the steady state due to the action of the dissipative channel. We assume that the unitary part of the dynamics is represented by Gaussian operations which preserve the Gaussian nature of the quantum state, i.e., arbitrary phase rotations, bounded squeezing, and unlimited displacements. In the ideal ansatz of unconstrained quantum control (i.e., when the unitary phase rotations, squeezing, and displacement of the mode can be performed instantaneously), we study how control can be optimized for speeding up the relaxation towards the fixed point of the dynamics and we analytically derive the optimal relaxation time. Our model has potentially interesting applications to the control of modes of electromagnetic radiation and of trapped levitated nanospheres.

  17. Programs for analysis and resizing of complex structures. [computerized minimum weight design]

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Prasad, B.

    1978-01-01

    The paper describes the PARS (Programs for Analysis and Resizing of Structures) system. PARS is a user-oriented system of programs for the minimum weight design of structures modeled by finite elements and subject to stress, displacement, flutter and thermal constraints. The system is built around SPAR - an efficient and modular general purpose finite element program - and consists of a series of processors that communicate through the use of a data base. An efficient optimizer based on the Sequential Unconstrained Minimization Technique (SUMT) with an extended interior penalty function and Newton's method is used. Several problems are presented for demonstration of the system capabilities.

  18. Poster — Thur Eve — 69: Computational Study of DVH-guided Cancer Treatment Planning Optimization Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghomi, Pooyan Shirvani; Zinchenko, Yuriy

    2014-08-15

    Purpose: To compare methods to incorporate the Dose Volume Histogram (DVH) curves into treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one tumor were involved in the treatment planning. The OARs and tumor were discretized into a total of 50,221 voxels. The number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches to replicate the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds a promise of substantial computational speed-up.

  1. A LAGRANGIAN GAUSS-NEWTON-KRYLOV SOLVER FOR MASS- AND INTENSITY-PRESERVING DIFFEOMORPHIC IMAGE REGISTRATION.

    PubMed

    Mang, Andreas; Ruthotto, Lars

    2017-01-01

    We present an efficient solver for diffeomorphic image registration problems in the framework of Large Deformations Diffeomorphic Metric Mappings (LDDMM). We use an optimal control formulation, in which the velocity field of a hyperbolic PDE needs to be found such that the distance between the final state of the system (the transformed/transported template image) and the observation (the reference image) is minimized. Our solver supports both stationary and non-stationary (i.e., transient or time-dependent) velocity fields. As transformation models, we consider both the transport equation (assuming intensities are preserved during the deformation) and the continuity equation (assuming mass-preservation). We consider the reduced form of the optimal control problem and solve the resulting unconstrained optimization problem using a discretize-then-optimize approach. A key contribution is the elimination of the PDE constraint using a Lagrangian hyperbolic PDE solver. Lagrangian methods rely on the concept of characteristic curves. We approximate these curves using a fourth-order Runge-Kutta method. We also present an efficient algorithm for computing the derivatives of the final state of the system with respect to the velocity field. This allows us to use fast Gauss-Newton based methods. We present quickly converging iterative linear solvers using spectral preconditioners that render the overall optimization efficient and scalable. Our method is embedded into the image registration framework FAIR and, thus, supports the most commonly used similarity measures and regularization functionals. We demonstrate the potential of our new approach using several synthetic and real world test problems with up to 14.7 million degrees of freedom.

  2. Monolithic, multi-bandgap, tandem, ultra-thin, strain-counterbalanced, photovoltaic energy converters with optimal subcell bandgaps

    DOEpatents

    Wanlass, Mark W [Golden, CO; Mascarenhas, Angelo [Lakewood, CO

    2012-05-08

    Modeling a monolithic, multi-bandgap, tandem, solar photovoltaic converter or thermophotovoltaic converter by constraining the bandgap value for the bottom subcell to no less than a particular value produces an optimum combination of subcell bandgaps that provide theoretical energy conversion efficiencies nearly as good as unconstrained maximum theoretical conversion efficiency models, but which are more conducive to actual fabrication to achieve such conversion efficiencies than unconstrained model optimum bandgap combinations. Achieving such constrained or unconstrained optimum bandgap combinations includes growth of a graded layer transition from larger lattice constant on the parent substrate to a smaller lattice constant to accommodate higher bandgap upper subcells and at least one graded layer that transitions back to a larger lattice constant to accommodate lower bandgap lower subcells and to counter-strain the epistructure to mitigate epistructure bowing.

  3. Adiabatic quantum computation with neutral atoms via the Rydberg blockade

    NASA Astrophysics Data System (ADS)

    Goyal, Krittika; Deutsch, Ivan

    2011-05-01

    We study a trapped-neutral-atom implementation of the adiabatic model of quantum computation whereby the Hamiltonian of a set of interacting qubits is changed adiabatically so that its ground state evolves to the desired output of the algorithm. We employ the ``Rydberg blockade interaction,'' which previously has been used to implement two-qubit entangling gates in the quantum circuit model. Here it is employed via off-resonant virtual dressing of the excited levels, so that atoms always remain in the ground state. The resulting dressed-Rydberg interaction is insensitive to the distance between the atoms within a certain blockade radius, making this process robust to temperature and vibrational fluctuations. Single qubit interactions are implemented with global microwaves and atoms are locally addressed with light shifts. With these ingredients, we study a protocol to implement the two-qubit Quadratic Unconstrained Binary Optimization (QUBO) problem. We model atom trapping, addressing, coherent evolution, and decoherence. We also explore collective control of the many-atom system and generalize the QUBO problem to multiple qubits. We acknowledge funding from the AQUARIUS project, Sandia National Laboratories.

  4. Method of interplanetary trajectory optimization for the spacecraft with low thrust and swing-bys

    NASA Astrophysics Data System (ADS)

    Konstantinov, M. S.; Thein, M.

    2017-07-01

    The method developed to avoid the complexity of solving the multipoint boundary value problem while optimizing interplanetary trajectories of the spacecraft with electric propulsion and a sequence of swing-bys is presented in the paper. This method is based on the use of preliminary problem solutions for the impulsive trajectories. The preliminary problem analyzed at the first stage of the study is formulated so that the analysis and optimization of a particular flight path is treated as an unconstrained minimization in the space of the selectable parameters. The existing methods can effectively solve this problem and make it possible to identify rational flight paths (the sequence of swing-bys) and to obtain an initial approximation for the main characteristics of the flight path (dates, values of the hyperbolic excess velocity, etc.). These characteristics can be used to optimize the trajectory of the spacecraft with electric propulsion. The special feature of the work is the introduction of a second (intermediate) stage of the research. At this stage some characteristics of the analyzed flight path (e.g. dates of swing-bys) are fixed and the problem is formulated so that the trajectory of the spacecraft with electric propulsion is optimized on selected sites of the flight path. The end-to-end optimization is carried out at the third (final) stage of the research. The distinctive feature of this stage is the analysis of the full set of optimality conditions for the considered flight path. The analysis of the characteristics of the optimal flight trajectories to Jupiter with Earth, Venus and Mars swing-bys for the spacecraft with electric propulsion is presented. The paper shows that a spacecraft weighing more than 7150 kg can be delivered into the vicinity of Jupiter along a trajectory with two Earth swing-bys by use of the space transportation system based on the "Angara A5" rocket launcher, the chemical upper stage "KVTK" and an electric propulsion system with input electrical power of 100 kW.

  5. 3D shape recovery of smooth surfaces: dropping the fixed-viewpoint assumption.

    PubMed

    Moses, Yael; Shimshoni, Ilan

    2009-07-01

    We present a new method for recovering the 3D shape of a featureless smooth surface from three or more calibrated images illuminated by different light sources (three of them are independent). This method is unique in its ability to handle images taken from unconstrained perspective viewpoints and unconstrained illumination directions. The correspondence between such images is hard to compute and no other known method can handle this problem locally from a small number of images. Our method combines geometric and photometric information in order to recover dense correspondence between the images and accurately computes the 3D shape. Only a single pass starting at one point and local computation are used. This is in contrast to methods that use the occluding contours recovered from many images to initialize and constrain an optimization process. The output of our method can be used to initialize such processes. In the special case of fixed viewpoint, the proposed method becomes a new perspective photometric stereo algorithm. Nevertheless, the introduction of the multiview setup, self-occlusions, and regions close to the occluding boundaries are better handled, and the method is more robust to noise than photometric stereo. Experimental results are presented for simulated and real images.

  6. Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Sen, S. K.

    2007-01-01

    Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real-world applications. Deterministic methods such as gradient algorithms, as well as randomized methods such as genetic algorithms, may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm or an evolutionary approach is preferable, at least from the quality (accuracy) of the results point of view. From a cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time, so there are no serious differences from the computational-complexity point of view. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We have presented here the pros and cons of both approaches so that the concerned reader/user can decide which approach is most suited for the problem at hand. Also, for a function which is known in tabular form instead of an analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one to decide which method, out of several available methods, should be employed to obtain the best (least error) output.

  7. Weighted SGD for ℓp Regression with Randomized Preconditioning.

    PubMed

    Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W

    2016-01-01

    In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that only depend on the lower dimension of the linear system, while maintaining low computation complexity. Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm. Particularly, when solving ℓ1 regression with size n by d, pwSGD returns an approximate solution with ε relative error in the objective value in O(log n·nnz(A) + poly(d)/ε²) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints. In general this is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and the solution vector measured in prediction norm in O(log n·nnz(A) + poly(d) log(1/ε)/ε) time. We show that for unconstrained ℓ2 regression, this complexity is comparable to that of RLA and is asymptotically better than several state-of-the-art solvers in the regime where the desired accuracy ε, high dimension n and low dimension d satisfy d ≥ 1/ε and n ≥ d²/ε. We also provide lower bounds on the coreset complexity for more general regression problems, indicating that new ideas will still be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems. Finally, the effectiveness of such algorithms is illustrated numerically on both synthetic and real datasets, and the results are consistent with our theoretical findings and demonstrate that pwSGD converges to a medium-precision solution, e.g., ε = 10⁻³, more quickly.

  8. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and show that they improve upon existing techniques by several orders of magnitude.
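
    On a scalar toy cost f(x, t) = (x - sin t)^2 / 2, one prediction step (an Euler step along the optimality-condition dynamics) plus one gradient correction step (the GTT variant) reads as follows; the cost, step sizes, and horizon are arbitrary choices for illustration.

```python
import numpy as np

# Track x*(t) = sin(t), the minimizer of f(x, t) = 0.5 * (x - sin t)^2.
h, gamma = 0.1, 0.5                    # sampling interval, correction step
grad = lambda x, t: x - np.sin(t)      # df/dx
dgrad_dt = lambda x, t: -np.cos(t)     # d(grad)/dt (cross derivative)
hess = lambda x, t: 1.0                # d2f/dx2

x, t, err = 0.0, 0.0, []
for _ in range(200):
    # Prediction: Euler step along dx*/dt = -hess^{-1} * dgrad/dt.
    x = x - h * dgrad_dt(x, t) / hess(x, t)
    t += h
    # Correction: one gradient step at the new sample time (GTT variant).
    x = x - gamma * grad(x, t)
    err.append(abs(x - np.sin(t)))
print("steady-state tracking error ~", float(np.mean(err[-50:])))
```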

  9. Constrained growth flips the direction of optimal phenological responses among annual plants.

    PubMed

    Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas

    2016-03-01

    Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. They can help explain phenological patterns along productivity gradients, and they link empirical observations made on calendar scales with life-history theory.

  10. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.

  12. Using Grey Wolf Algorithm to Solve the Capacitated Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Korayem, L.; Khorsid, M.; Kassem, S. S.

    2015-05-01

The capacitated vehicle routing problem (CVRP) is a class of vehicle routing problems (VRPs). In the CVRP, a set of identical vehicles with fixed capacities is required to fulfill customers' demands for a single commodity. The main objective is to minimize the total cost or distance traveled by the vehicles while satisfying a number of constraints, such as the capacity constraint of each vehicle and logical flow constraints. One of the methods employed in solving the CVRP is the cluster-first route-second method. It is a technique based on grouping customers into a number of clusters, where each cluster is served by one vehicle. Once clusters are formed, a route determining the best sequence to visit customers is established within each cluster. The bio-inspired grey wolf optimizer (GWO), introduced in 2014, has proven to be efficient in solving unconstrained as well as constrained optimization problems. In the current research, our main contributions are: combining GWO with the traditional K-means clustering algorithm to generate the ‘K-GWO’ algorithm, deriving a capacitated version of the K-GWO algorithm by incorporating a capacity constraint into the aforementioned algorithm, and finally, developing two new clustering heuristics. The resulting algorithm is used in the clustering phase of the cluster-first route-second method to solve the CVRP. The algorithm is tested on a number of benchmark problems with encouraging results.
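
    A minimal sketch of the core GWO update is given below (generic objective; the K-means coupling, capacity constraint, and clustering heuristics of K-GWO are not reproduced):

```python
import numpy as np

# Minimal grey wolf optimizer (GWO) sketch on a generic objective.
rng = np.random.default_rng(2)

def gwo(f, dim, n_wolves=20, iters=200, lo=-5.0, hi=5.0):
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]   # three best wolves
        a = 2.0 * (1 - t / iters)                         # decreases 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2             # coefficient vectors
                D = np.abs(C * leader - X[i])             # distance to leader
                new += (leader - A * D) / 3.0             # average of 3 pulls
            X[i] = np.clip(new, lo, hi)
    return X[np.argmin(np.apply_along_axis(f, 1, X))]

sphere = lambda x: float(np.sum(x**2))
print(gwo(sphere, dim=5))                                 # should approach 0
```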

  13. Experimental and simulation studies of multivariable adaptive optimization of continuous bioreactors using bilevel forgetting factors.

    PubMed

    Chang, Y K; Lim, H C

    1989-08-20

A multivariable on-line adaptive optimization algorithm using a bilevel forgetting factor method was developed and applied to a continuous baker's yeast culture in simulation and experimental studies to maximize the cellular productivity by manipulating the dilution rate and the temperature. The algorithm showed good optimization speed, adaptability, and reoptimization capability, and it was able to stably maintain the process around the optimum point for an extended period of time. Two cases were investigated: unconstrained and constrained optimization. In the constrained optimization, the ethanol concentration was used as an index of the baking quality of the yeast cells. An equality constraint with a quadratic penalty was imposed on the ethanol concentration to keep its level close to a hypothetical "optimum" value. The developed algorithm was experimentally applied to a baker's yeast culture to demonstrate its validity. Only unconstrained optimization was carried out experimentally. A set of tuning parameter values was suggested after evaluating the results from several experimental runs. With those tuning parameter values the optimization took 50-90 h. At the attained steady state the dilution rate was 0.310 h(-1), the temperature 32.8 degrees C, and the cellular productivity 1.50 g/L/h.

  14. New hybrid conjugate gradient methods with the generalized Wolfe line search.

    PubMed

    Xu, Xiao; Kong, Fan-Yu

    2016-01-01

The conjugate gradient method is an efficient technique for solving unconstrained optimization problems. In this paper, we form a linear combination, with parameter βk, of the DY method and the HS method, and put forward a hybrid DY-HS method. We also propose a hybrid of FR and PRP by the same means. Additionally, to implement the two hybrid methods, we generalize the Wolfe line search to compute the step size αk for each. With the new Wolfe line search, the two hybrid methods possess the descent property, and their global convergence can also be proved.
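
    For reference, one common way of writing such a hybrid uses the standard DY and HS coefficients combined through a parameter θk in [0, 1]; the paper's exact parameterization may differ:

```latex
% Hybrid CG direction built from a convex combination of the DY and HS
% coefficients (theta_k in [0,1]; the paper's exact choice may differ).
\[
\beta_k^{HS} = \frac{g_{k+1}^{\top} y_k}{d_k^{\top} y_k}, \qquad
\beta_k^{DY} = \frac{\lVert g_{k+1}\rVert^2}{d_k^{\top} y_k}, \qquad
y_k = g_{k+1} - g_k,
\]
\[
\beta_k = (1-\theta_k)\,\beta_k^{HS} + \theta_k\,\beta_k^{DY}, \qquad
d_{k+1} = -g_{k+1} + \beta_k\, d_k .
\]
```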

  15. A new nonlinear conjugate gradient coefficient under strong Wolfe-Powell line search

    NASA Astrophysics Data System (ADS)

    Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

A nonlinear conjugate gradient (CG) method plays an important role in solving large-scale unconstrained optimization problems. The method is widely used due to its simplicity, and it is known to possess the sufficient descent condition and global convergence properties. In this paper, a new nonlinear CG coefficient βk is presented, derived by employing the strong Wolfe-Powell inexact line search. The performance of the new βk is tested in terms of number of iterations and central processing unit (CPU) time using MATLAB software with an Intel Core i7-3470 CPU. Numerical experimental results show that the new βk converges more rapidly than classical CG methods.

  16. Preconditioning strategies for nonlinear conjugate gradient methods, based on quasi-Newton updates

    NASA Astrophysics Data System (ADS)

    Andrea, Caliciotti; Giovanni, Fasano; Massimo, Roma

    2016-10-01

This paper reports two proposals of possible preconditioners for the Nonlinear Conjugate Gradient (NCG) method in large-scale unconstrained optimization. On one hand, the common idea of our preconditioners is inspired by L-BFGS quasi-Newton updates; on the other hand, we aim at explicitly approximating, in some sense, the inverse of the Hessian matrix. Since we deal with large-scale optimization problems, we propose matrix-free approaches where the preconditioners are built using symmetric low-rank updating formulae. Our distinctive new contribution relies on using information on the objective function, collected as a by-product of the NCG at previous iterations. Broadly speaking, our first approach exploits the secant equation in order to impose interpolation conditions on the objective function. In the second proposal, we adopt an ad hoc modified-secant approach in order to possibly guarantee some additional theoretical properties.
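
    A hedged sketch of the underlying mechanism: the standard L-BFGS two-loop recursion applies an inverse-Hessian approximation, built from (s, y) pairs collected at earlier iterations, as a matrix-free preconditioner. The paper's specific low-rank updating formulae differ; this only shows the generic building block.

```python
import numpy as np

# Generic L-BFGS two-loop recursion, usable as a matrix-free preconditioner
# inside nonlinear CG; the (s_k, y_k) pairs are by-products of past iterations.
def lbfgs_apply(g, s_list, y_list):
    """Return H*g, where H approximates the inverse Hessian."""
    q, alphas = g.copy(), []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s); alphas.append(a); q = q - a * y
    if s_list:                                   # initial scaling H0 = gamma*I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s); q = q + (a - b) * s
    return q

# Tiny check on a quadratic f(x) = 0.5 x'Ax: H*g should approximate A^{-1} g.
A = np.diag([1.0, 10.0, 100.0])
xs = [np.array([1.0, 1.0, 1.0]), np.array([0.9, 0.5, 0.1]),
      np.array([0.5, 0.2, 0.05])]
s_list = [xs[i + 1] - xs[i] for i in range(2)]
y_list = [A @ s for s in s_list]
g = A @ xs[-1]
print(lbfgs_apply(g, s_list, y_list), np.linalg.solve(A, g))  # roughly close
```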

  17. Efficiency of unconstrained minimization techniques in nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.; Knight, N. F., Jr.

    1978-01-01

    Unconstrained minimization algorithms have been critically evaluated for their effectiveness in solving structural problems involving geometric and material nonlinearities. The algorithms have been categorized as being zeroth, first, or second order depending upon the highest derivative of the function required by the algorithm. The sensitivity of these algorithms to the accuracy of derivatives clearly suggests using analytically derived gradients instead of finite difference approximations. The use of analytic gradients results in better control of the number of minimizations required for convergence to the exact solution.

  18. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to reconstruct the original image through an iterative method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of the Split Bregman method on sonar images.
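
    A toy sketch of the splitting for an ℓ1-regularized least-squares problem follows (generic random data, not sonar imagery; the image-gradient coupling used for TV denoising is simplified here to d = u):

```python
import numpy as np

# Toy Split Bregman sketch for min_u ||u||_1 + (mu/2)||A u - f||_2^2,
# splitting d = u; the l1 and l2 energies are handled in separate subproblems.
rng = np.random.default_rng(3)
m, n = 80, 200
A = rng.standard_normal((m, n))
u_true = np.zeros(n)
u_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
f = A @ u_true

shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)  # soft threshold
mu, lam = 10.0, 1.0
u = np.zeros(n); d = np.zeros(n); b = np.zeros(n)
M = mu * A.T @ A + lam * np.eye(n)   # formed once (a practical code would factor it once)
rhs0 = mu * A.T @ f
for _ in range(200):
    u = np.linalg.solve(M, rhs0 + lam * (d - b))   # quadratic (l2) subproblem
    d = shrink(u + b, 1.0 / lam)                   # l1 subproblem, closed form
    b += u - d                                     # Bregman update
print("support recovered:", np.nonzero(np.abs(u) > 1e-3)[0], np.nonzero(u_true)[0])
```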

  19. A Path Algorithm for Constrained Estimation

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2013-01-01

Many least-squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382
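
    A tiny worked example (mine, not the paper's) makes the "finite penalty constant" point visible: projecting a vector onto the nonnegative orthant with an absolute-value penalty has a closed-form solution path.

```python
import numpy as np

# Toy illustration of the exact (absolute value) penalty: project y onto the
# nonnegative orthant by minimizing 0.5*||x - y||^2 + rho * sum(max(0, -x)).
# The penalized minimizer is separable and has a closed form, so we can trace
# the whole path in rho.
y = np.array([1.3, -0.4, -2.0])

def x_of_rho(rho):
    # For y_i >= 0 the constraint is inactive; for y_i < 0 the minimizer
    # slides linearly toward the boundary and sticks at 0 once rho >= -y_i.
    return np.where(y >= 0, y, np.minimum(y + rho, 0.0))

for rho in [0.0, 0.5, 1.0, 2.0, 3.0]:
    print(f"rho={rho:3.1f}  x={x_of_rho(rho)}")
# For rho >= 2.0 (= max_i(-y_i)) the path has reached the exact constrained
# solution max(y, 0) at a finite penalty constant -- unlike the quadratic
# penalty, which only reaches it in the limit rho -> infinity.
```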

  20. CometBoards Users Manual Release 1.0

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Coroneos, Rula M.; Patnaik, Surya N.; Hopkins, Dale A.; Berke, Lazlo

    1996-01-01

Several nonlinear mathematical programming algorithms for structural design applications are available at present. These include the sequence of unconstrained minimizations technique, the method of feasible directions, and the sequential quadratic programming technique. The optimality criteria technique and the fully utilized design concept are two other structural design methods. A project was undertaken to bring all these design methods under a common computer environment so that a designer can select any one of these tools that may be suitable for his/her application. To facilitate selection of a design algorithm, to validate and check out the computer code, and to ascertain the relative merits of the design tools, modest finite element structural analysis programs based on the concept of stiffness and integrated force methods have been coupled to each design method. The code that contains both these design and analysis tools, by reading input information from analysis and design data files, can cast the design of a structure as a minimum-weight optimization problem. The code can then solve it with a user-specified optimization technique and a user-specified analysis method. This design code is called CometBoards, which is an acronym for Comparative Evaluation Test Bed of Optimization and Analysis Routines for the Design of Structures. This manual describes for the user a step-by-step procedure for setting up the input data files and executing CometBoards to solve a structural design problem. The manual includes the organization of CometBoards; instructions for preparing input data files; the procedure for submitting a problem; illustrative examples; and several demonstration problems. A set of 29 structural design problems has been solved by using all the optimization methods available in CometBoards. A summary of the optimum results obtained for these problems is appended to this users manual. CometBoards, at present, is available for Posix-based Cray and Convex computers, Iris and Sun workstations, and the VM/CMS system.

  1. Constrained Versions of DEDICOM for Use in Unsupervised Part-Of-Speech Tagging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel; Chew, Peter A.

This report describes extensions of DEDICOM (DEcomposition into DIrectional COMponents) data models [3] that incorporate bound and linear constraints. The main purpose of these extensions is to investigate the use of improved data models for unsupervised part-of-speech tagging, as described by Chew et al. [2]. In that work, a single-domain, two-way DEDICOM model was computed on a matrix of bigram frequencies of tokens in a corpus and used to identify parts-of-speech as an unsupervised approach to that problem. An open problem identified in that work was the computation of a DEDICOM model that more closely resembled the matrices used in a Hidden Markov Model (HMM), specifically through post-processing of the DEDICOM factor matrices. The work reported here consists of the description of several models that aim to provide a direct solution to that problem and a way to fit those models. The approach taken here is to incorporate the model requirements as bound and linear constraints in the DEDICOM model directly and to solve the data-fitting problem as a constrained optimization problem. This is in contrast to the typical approaches in the literature, where the DEDICOM model is fit using unconstrained optimization approaches and model requirements are satisfied as a post-processing step.

  2. Image Registration and Data Assimilation as a QUBO on the D-Wave Quantum Annealer

    NASA Astrophysics Data System (ADS)

    Pelissier, C.; LeMoigne, J.; Halem, M.; Simpson, D. G.; Clune, T.

    2016-12-01

The advent of the commercially available D-Wave quantum annealer has for the first time allowed investigations of the potential of quantum effects to efficiently carry out certain numerical tasks. The D-Wave computer was initially promoted as a tool to solve Quadratic Unconstrained Binary Optimization problems (QUBOs), but currently it is also being used to generate the Boltzmann statistics required to train Restricted Boltzmann Machines (RBMs). We consider the potential of this new architecture in performing the numerical computations required to estimate terrestrial carbon fluxes from OCO-2 observations using the LIS model. The use of RBMs is being investigated in this work, but here we focus on the D-Wave as a QUBO solver and its potential to carry out image registration and data assimilation. QUBOs are formulated for both problems, and results generated using the D-Wave 2X at the NAS supercomputing facility are presented.

  3. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  4. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
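
    A worked special case (a standard beta-binomial computation, included here for illustration rather than taken from the paper) shows where the linearity comes from:

```latex
% Beta prior p(x) \propto x^{k}(1-x)^{m} (a Beta(k+1, m+1) density) with a
% binomial likelihood for N jumps in n trials; conjugacy gives
\[
p(x \mid N) \propto x^{k+N}(1-x)^{m+n-N}
\quad\Longrightarrow\quad
\hat{x}_{\mathrm{MMSE}} = \mathbb{E}[x \mid N] = \frac{k+1+N}{k+m+2+n},
\]
% which is affine in the observed count N -- consistent with the observation
% that optimum and linear MMSE estimates coincide for this class of densities.
```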

  5. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution predicting various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, whose objective is usually uncertain, noisy, or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that needs no derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
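
    A minimal sketch of derivative-free Nelder-Mead minimization, as available in SciPy, on a hypothetical nonsmooth stand-in for the core-parameter objective (the function below is illustrative, not the paper's modal-field merit function):

```python
import numpy as np
from scipy.optimize import minimize

# Derivative-free Nelder-Mead minimization of a noisy/rough 1-D objective,
# standing in for the core-parameter fit described above.
def model_error(u):
    u = u[0]
    return (u - 1.7) ** 2 + 0.05 * np.abs(np.sin(40.0 * u))  # nonsmooth wiggle

res = minimize(model_error, x0=[1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print(res.x, res.fun)   # no derivative information was required
```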

  6. Indirect synthesis of multidegree-of-freedom transient systems

    NASA Technical Reports Server (NTRS)

    Chen, Y. H.; Pilkey, W. D.; Kalinowski, A. J.

    1976-01-01

The indirect synthesis method is developed and shown to be capable of leading to a near-optimal design of multidegree-of-freedom, multidesign-element, transient, nonlinear dynamical systems. The basis of the approach is to select the open design parameters such that the response of the portion of the system being designed approximates the limiting performance solution. The limiting performance problem can be formulated as one of linear programming by replacing all portions of the system subject to transient disturbances by control forces and supposing that the remaining portions, like the overall kinematic constraints, are linear. One then selects the design parameters for which the response most closely matches the limiting performance solution, which can be achieved by unconstrained curve-fitting techniques.

  7. Adiabatic Quantum Computing via the Rydberg Blockade

    NASA Astrophysics Data System (ADS)

    Keating, Tyler; Goyal, Krittika; Deutsch, Ivan

    2012-06-01

    We study an architecture for implementing adiabatic quantum computation with trapped neutral atoms. Ground state atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism, thereby providing the requisite entangling interactions. As a benchmark we study the performance of a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. We model a realistic architecture, including the effects of magnetic level structure, with qubits encoded into the clock states of ^133Cs, effective B-fields implemented through microwaves and light shifts, and atom-atom coupling achieved by excitation to a high-lying Rydberg level. Including the fundamental effects of photon scattering we find a high fidelity for the two-qubit implementation.

  8. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    NASA Astrophysics Data System (ADS)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.

  9. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.

  10. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044

  11. Particle Swarm Optimization Toolbox

    NASA Technical Reports Server (NTRS)

    Grant, Michael J.

    2010-01-01

The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade among competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents, with the aim of producing a solution improved over either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers, whose only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since its specific detail is of no concern to the optimizer. These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.
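
    A minimal single-objective PSO loop with the standard inertia, cognitive, and social terms is sketched below (the toolbox's constraint handling and multi-objective features are omitted, and the coefficients are conventional defaults, not the toolbox's):

```python
import numpy as np

# Minimal single-objective PSO: inertia w, cognitive pull c1 toward each
# particle's personal best, social pull c2 toward the global best.
rng = np.random.default_rng(4)

def pso(f, dim, n=30, iters=300, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
    pbest = x.copy(); pval = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[np.argmin(pval)]
    return gbest

print(pso(lambda z: float(np.sum(z**2)), dim=4))  # converges near the origin
```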

  12. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EAs with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EAs. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de facto standard test problem set for constrained optimization with EAs. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (optimization of the front-end section for the future neutrino factory), and a problem of normal form defect function optimization, which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential. The developed optimization scenarios and tools can be used to approach similar problems.

  13. Atlas-Independent, Electrophysiological Mapping of the Optimal Locus of Subthalamic Deep Brain Stimulation for the Motor Symptoms of Parkinson Disease.

    PubMed

    Conrad, Erin C; Mossner, James M; Chou, Kelvin L; Patil, Parag G

    2018-05-23

Deep brain stimulation (DBS) of the subthalamic nucleus (STN) improves motor symptoms of Parkinson disease (PD). However, motor outcomes can be variable, perhaps due to inconsistent positioning of the active contact relative to an unknown optimal locus of stimulation. Here, we determine the optimal locus of STN stimulation in a geometrically unconstrained, mathematically precise, and atlas-independent manner, using Unified Parkinson Disease Rating Scale (UPDRS) motor outcomes and an electrophysiological neuronal stimulation model. In 20 patients with PD, we mapped motor improvement to active electrode location, relative to the individual, directly MRI-visualized STN. Our analysis included a novel, unconstrained computational electrical-field model of neuronal activation to estimate the optimal locus of DBS. We mapped the optimal locus to a tightly defined ovoid region 0.49 mm lateral, 0.88 mm posterior, and 2.63 mm dorsal to the anatomical midpoint of the STN. On average, this locus is 11.75 mm lateral, 1.84 mm posterior, and 1.08 mm ventral to the mid-commissural point. Our novel, atlas-independent method reveals a single, ovoid optimal locus of stimulation in STN DBS for PD. The methodology, here applied to UPDRS and PD, is generalizable to atlas-independent mapping of other motor and non-motor effects of DBS. © 2018 S. Karger AG, Basel.

  14. Locating Critical Circular and Unconstrained Failure Surface in Slope Stability Analysis with Tailored Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Pasik, Tomasz; van der Meij, Raymond

    2017-12-01

This article presents an efficient search method for representative circular and unconstrained slip surfaces using a tailored genetic algorithm. Searches for unconstrained slip planes with rigid equilibrium methods are as yet uncommon in engineering practice, and few publications regarding truly free slip planes exist. The proposed method is an effective procedure resulting from the right combination of initial-population type and selection, crossover, and mutation methods. The procedure needs little computational effort to find the optimum, unconstrained slip plane. The methodology described in this paper is implemented using Mathematica. The implementation, along with further explanations, is fully presented so the results can be reproduced. Sample slope stability calculations are performed for four cases, along with a detailed result interpretation. Two cases are compared with analyses described in earlier publications. The remaining two are practical cases of slope stability analyses of dikes in the Netherlands. These four cases show the benefits of analyzing slope stability with a rigid equilibrium method combined with a genetic algorithm. The paper concludes by describing possibilities and limitations of using the genetic algorithm in the context of the slope stability problem.
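
    A generic real-coded GA skeleton of the kind described (tournament selection, blend crossover, Gaussian mutation) is sketched below; the paper's slip-surface encoding and tailored operators are not reproduced:

```python
import numpy as np

# Generic real-coded GA loop: tournament selection, blend crossover,
# Gaussian mutation. Only the algorithmic skeleton is illustrated.
rng = np.random.default_rng(5)

def ga(f, dim, pop=40, gens=200, lo=-5.0, hi=5.0, pm=0.2):
    X = rng.uniform(lo, hi, (pop, dim))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, X)
        children = []
        for _ in range(pop):
            i, j = rng.integers(pop, size=2)          # tournament of two
            p1 = X[i] if fit[i] < fit[j] else X[j]
            i, j = rng.integers(pop, size=2)
            p2 = X[i] if fit[i] < fit[j] else X[j]
            a = rng.random(dim)                        # blend crossover
            child = a * p1 + (1 - a) * p2
            if rng.random() < pm:                      # Gaussian mutation
                child += 0.1 * rng.standard_normal(dim)
            children.append(np.clip(child, lo, hi))
        X = np.array(children)
    return X[np.argmin(np.apply_along_axis(f, 1, X))]

print(ga(lambda z: float(np.sum(z**2)), dim=6))        # approaches the origin
```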

  15. A modified three-term PRP conjugate gradient algorithm for optimization models.

    PubMed

    Wu, Yanlin

    2017-01-01

The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization, where their method has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but the proof fails under the Wolfe line search technique. Inspired by their method, we make a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property also holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the steplength is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.
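
    For reference, the Zhang et al. three-term PRP direction (up to notational differences) is

```latex
% Three-term PRP direction of Zhang et al. (2006); y_{k-1} = g_k - g_{k-1}.
\[
d_k = -g_k + \beta_k^{PRP} d_{k-1} - \theta_k y_{k-1}, \qquad
\beta_k^{PRP} = \frac{g_k^{\top} y_{k-1}}{\lVert g_{k-1}\rVert^2}, \qquad
\theta_k = \frac{g_k^{\top} d_{k-1}}{\lVert g_{k-1}\rVert^2},
\]
% so that g_k^T d_k = -||g_k||^2: the two extra terms cancel exactly, and the
% sufficient descent property holds identically, independent of the line search.
```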

  16. Dai-Kou type conjugate gradient methods with a line search only using gradient.

    PubMed

    Huang, Yuanyuan; Liu, Changhe

    2017-01-01

In this paper, Dai-Kou type conjugate gradient methods are developed to solve the optimality condition of an unconstrained optimization problem; they utilize only gradient information and therefore have a broader application scope. Under suitable conditions, the developed methods are globally convergent. Numerical tests and comparisons with the PRP+ conjugate gradient method, which likewise uses only the gradient, show that the methods are efficient.

  17. Data Assimilation on a Quantum Annealing Computer: Feasibility and Scalability

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Halem, M.; Chapman, D. R.; Pelissier, C. S.

    2014-12-01

Data assimilation is one of the ubiquitous and computationally hard problems in the Earth sciences. In particular, ensemble-based methods require a large number of model evaluations to estimate the prior probability density over system states, and variational methods require adjoint calculations and iteration to locate the maximum a posteriori solution in the presence of nonlinear models and observation operators. Quantum annealing computers (QACs) like the new D-Wave housed at the NASA Ames Research Center can be used for optimization and sampling, and therefore offer a new possibility for efficiently solving hard data assimilation problems. Coding on the QAC is not straightforward: a problem must be posed as a Quadratic Unconstrained Binary Optimization (QUBO) and mapped to a spherical Chimera graph. We have developed a method for compiling nonlinear 4D-Var problems on the D-Wave that consists of five steps: (1) emulating the nonlinear model and/or observation function using radial basis functions (RBFs) or Chebyshev polynomials; (2) truncating a Taylor series around each RBF kernel; (3) reducing the Taylor polynomial to a quadratic using ancilla gadgets; (4) mapping the real-valued quadratic to a fixed-precision binary quadratic; and (5) mapping the fully coupled binary quadratic to a partially coupled spherical Chimera graph using ancilla gadgets. At present the D-Wave contains 512 qubits (with 1024- and 2048-qubit machines due in the next two years); this machine size allows us to estimate only 3 state variables at each satellite overpass. However, QACs solve optimization problems using a physical (quantum) system, and therefore do not require iterations or calculation of model adjoints. This has the potential to revolutionize our ability to efficiently perform variational data assimilation as the size of these computers grows in the coming years.
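
    Step (4) can be made concrete with a small sketch (my own toy construction, not the authors' code): a fixed-precision binary expansion turns a real-valued quadratic into a QUBO exactly, because b² = b for binary variables.

```python
import numpy as np
from itertools import product

# Encode each real variable in [0, 1) with B binary digits,
# x_j = sum_k 2^{-k} b_{jk}, so min x'Qx + c'x becomes a QUBO over the bits b.
# (The ancilla gadgets and Chimera embedding of steps (3) and (5) are omitted.)
def real_quadratic_to_qubo(Q, c, B=3):
    d = len(c)
    weights = 2.0 ** -np.arange(1, B + 1)       # 1/2, 1/4, ..., 2^-B
    E = np.zeros((d, d * B))                    # x = E @ b maps bits to reals
    for j in range(d):
        E[j, j * B:(j + 1) * B] = weights
    # b_i^2 = b_i, so linear terms fold onto the QUBO diagonal.
    return E.T @ Q @ E + np.diag(E.T @ c)

Q = np.array([[2.0, -1.0], [-1.0, 2.0]])
c = np.array([-1.0, -1.0])
M = real_quadratic_to_qubo(Q, c, B=3)
best = min(product([0, 1], repeat=M.shape[0]),
           key=lambda b: np.array(b) @ M @ np.array(b))
print(best)   # -> (1, 0, 0, 1, 0, 0), i.e. x = (0.5, 0.5) = -Q^{-1} c / 2
```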

  18. EXADS - EXPERT SYSTEM FOR AUTOMATED DESIGN SYNTHESIS

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.

    1994-01-01

The expert system called EXADS was developed to aid users of the Automated Design Synthesis (ADS) general-purpose optimization program. Because of the general-purpose nature of ADS, it is difficult for a nonexpert to select the best choice of strategy, optimizer, and one-dimensional search options from the one hundred or so combinations that are available. EXADS aids engineers in determining the best combination based on their knowledge of the problem and the expert knowledge previously stored by the experts who developed ADS. EXADS is a customized application of the AESOP artificial intelligence program (the general version of AESOP is available separately from COSMIC, as is the ADS program). The expert system consists of two main components. The knowledge base contains about 200 rules and is divided into three categories: constrained, unconstrained, and constrained treated as unconstrained. The EXADS inference engine is rule-based and makes decisions about a particular situation using hypotheses (potential solutions), rules, and answers to questions drawn from the rule base. EXADS is backward-chaining, that is, it works from hypotheses to facts. The rule base was compiled from sources such as literature searches, ADS documentation, and engineer surveys. EXADS will accept answers such as yes, no, maybe, likely, and don't know, or a certainty factor ranging from 0 to 10. When any hypothesis reaches a confidence level of 90% or more, it is deemed the best choice and displayed to the user. If no hypothesis is confirmed, the user can examine explanations of why the hypotheses failed to reach the 90% level. The IBM PC version of EXADS is written in IQ-LISP for execution under DOS 2.0 or higher with a central memory requirement of approximately 512K of 8-bit bytes. This program was developed in 1986.

  19. Structural optimization via a design space hierarchy

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1976-01-01

    Mathematical programming techniques provide a general approach to automated structural design. An iterative method is proposed in which design is treated as a hierarchy of subproblems, one being locally constrained and the other being locally unconstrained. It is assumed that the design space is locally convex in the case of good initial designs and that the objective and constraint functions are continuous, with continuous first derivatives. A general design algorithm is outlined for finding a move direction which will decrease the value of the objective function while maintaining a feasible design. The case of one-dimensional search in a two-variable design space is discussed. Possible applications are discussed. A major feature of the proposed algorithm is its application to problems which are inherently ill-conditioned, such as design of structures for optimum geometry.

  20. The use of single-date MODIS imagery for estimating large-scale urban impervious surface fraction with spectral mixture analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Wu, Changshan

    2013-12-01

    Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
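
    A compact numpy sketch of the LSS idea with synthetic data (hypothetical band count and endmember spectra, not the MODIS processing chain): with known abundances for training pixels, endmember spectra solve a least-squares problem, and unconstrained unmixing of a new pixel is then a second least-squares solve.

```python
import numpy as np

# LSS endmember derivation: X ~ F E, with known training abundances F
# (n_samples x m endmembers) and pixel spectra X (n_samples x bands).
rng = np.random.default_rng(6)
bands, m, n = 7, 3, 500                          # illustrative sizes
E_true = rng.uniform(0.05, 0.9, (m, bands))      # hypothetical endmembers
F = rng.dirichlet(np.ones(m), size=n)            # sample abundances (sum to 1)
X = F @ E_true + 0.005 * rng.standard_normal((n, bands))

E_hat, *_ = np.linalg.lstsq(F, X, rcond=None)    # extrapolated endmember spectra

# Unconstrained SMA for a new pixel: the estimated fractions are not forced
# to be nonnegative or to sum to one.
pixel = 0.6 * E_true[0] + 0.4 * E_true[2]
abund, *_ = np.linalg.lstsq(E_hat.T, pixel, rcond=None)
print("estimated fractions:", abund)             # ~ [0.6, 0.0, 0.4]
```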

  1. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu; Phanish, Deepa

We present an Augmented Lagrangian formulation and its real-space implementation for non-periodic Orbital-Free Density Functional Theory (OF-DFT) calculations. In particular, we rewrite the constrained minimization problem of OF-DFT as a sequence of minimization problems without any constraint, thereby making it amenable to powerful unconstrained optimization algorithms. Further, we develop a parallel implementation of this approach for the Thomas–Fermi–von Weizsacker (TFW) kinetic energy functional in the framework of higher-order finite-differences and the conjugate gradient method. With this implementation, we establish that the Augmented Lagrangian approach is highly competitive compared to the penalty and Lagrange multiplier methods. Additionally, we show that higher-order finite-differences represent a computationally efficient discretization for performing OF-DFT simulations. Overall, we demonstrate that the proposed formulation and implementation are both efficient and robust by studying selected examples, including systems consisting of thousands of atoms. We validate the accuracy of the computed energies and forces by comparing them with those obtained by existing plane-wave methods.
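
    The structural idea, rewriting an equality-constrained minimization as a sequence of unconstrained subproblems with multiplier and penalty updates, can be sketched generically (toy objective and constraint, not the OF-DFT functional):

```python
import numpy as np
from scipy.optimize import minimize

# Generic augmented Lagrangian loop for min f(x) s.t. c(x) = 0, solved as a
# sequence of unconstrained problems.
f = lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2     # toy objective
c = lambda x: x[0] + x[1]                            # toy equality constraint

x, lam, mu = np.zeros(2), 0.0, 1.0
for _ in range(10):
    L = lambda z: f(z) + lam * c(z) + 0.5 * mu * c(z) ** 2
    x = minimize(L, x, method="BFGS").x              # unconstrained subproblem
    lam += mu * c(x)                                 # multiplier update
    mu *= 2.0                                        # mild penalty growth
print(x, c(x))   # approaches the constrained minimizer (1.5, -1.5), c(x) -> 0
```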

  3. Experiential effects on mirror systems and social learning: implications for social intelligence.

    PubMed

    Reader, Simon M

    2014-04-01

    Investigations of biases and experiential effects on social learning, social information use, and mirror systems can usefully inform one another. Unconstrained learning is predicted to shape mirror systems when the optimal response to an observed act varies, but constraints may emerge when immediate error-free responses are required and evolutionary or developmental history reliably predicts the optimal response. Given the power of associative learning, such constraints may be rare.

  4. An effective parameter optimization with radiation balance constraints in the CAM5

    NASA Astrophysics Data System (ADS)

    Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.

    2017-12-01

Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimization, so the simulation results obtained with the optimal parameters may violate conditions that the model must maintain. In this study, the radiation balance constraint is taken as an example and is incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this constrained optimization problem. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take a synthesized metric using global means of radiation, precipitation, relative humidity, and temperature as the goal of optimization, and simultaneously treat the conditions that FLUT and FSNTOA must satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are required to be approximately equal to 240 Wm-2 in CAM5. Experimental results show that the synthesized metric is 13.6% better than in the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions: the FLUT condition is well satisfied, clearly better than the annual-mean FLUT obtained with the default parameters, while FSNTOA deviates slightly from the target value, with a relative error of less than 7.7‰.
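
    A toy version of constrained tuning (hypothetical surrogate metric and flux response, solved here with SciPy's SLSQP rather than a hand-rolled Lagrange multiplier iteration; the names and numbers are illustrative only):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a synthesized skill metric J(p) subject to a radiation-balance-style
# equality constraint. J and flux are hypothetical surrogates, not CAM5 output.
target_flux = 240.0                                       # W m^-2, per the abstract

J = lambda p: (p[0] - 0.3) ** 2 + 2 * (p[1] - 1.2) ** 2   # surrogate metric
flux = lambda p: 235.0 + 10 * p[0] + 3 * p[1]             # surrogate FLUT(p)

res = minimize(J, x0=[0.5, 0.5], method="SLSQP",
               constraints=[{"type": "eq",
                             "fun": lambda p: flux(p) - target_flux}])
print(res.x, flux(res.x))   # tuned parameters with the balance condition met
```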

  5. Conceptual/preliminary design study of subsonic V/STOL and STOVL aircraft derivatives of the S-3A

    NASA Technical Reports Server (NTRS)

    Kidwell, G. H., Jr.

    1981-01-01

A computerized aircraft synthesis program was used to examine the feasibility and capability of a V/STOL aircraft based on the Navy S-3A aircraft. Two major airframe modifications are considered: replacement of the wing and substitution of deflected-thrust turbofan engines similar to the Pegasus engine. Three planform configurations for the all-composite wing were investigated: an unconstrained-span design, a design with the span constrained to 64 feet, and an unconstrained-span oblique wing design. Each design was optimized using the same design variables, and performance and control analyses were performed. The oblique wing configuration was found to have the greatest potential in this application. The mission performance of these V/STOL aircraft compares favorably with that of the CTOL S-3A.

  6. Optimal transfers between libration-point orbits in the elliptic restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Hiday, Lisa Ann

    1992-09-01

    A strategy is formulated to design optimal impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L(1) libration point of the Sun-Earth/Moon barycenter system. Two methods of constructing nominal transfers, for which the fuel cost is to be minimized, are developed; both inferior and superior transfers between two halo orbits are considered. The necessary conditions for an optimal transfer trajectory are stated in terms of the primer vector. The adjoint equation relating reference and perturbed trajectories in this formulation of the elliptic restricted three-body problem is shown to be distinctly different from that obtained in the analysis of trajectories in the two-body problem. Criteria are established whereby the cost on a nominal transfer can be improved by the addition of an interior impulse or by the implementation of coastal arcs in the initial and final orbits. The necessary conditions for the local optimality of a time-fixed transfer trajectory possessing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. The optimality of a time-free transfer containing coastal arcs is surmised by examination of the slopes at the endpoints of a plot of the magnitude of the primer vector over the duration of the transfer path. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The position and timing of each interior impulse applied to a time-fixed transfer as well as the direction and length of coastal periods implemented on a time-free transfer are specified by the unconstrained minimization of the appropriate variation in cost utilizing a multivariable search technique. Although optimal solutions in some instances are elusive, the time-fixed and time-free optimization algorithms prove to be very successful in diminishing costs on nominal transfer trajectories. The inclusion of coastal arcs on time-free superior and inferior transfers results in significant modification of the transfer time of flight caused by shifts in departure and arrival locations on the halo orbits.

  7. Finding Maximum Cliques on the D-Wave Quantum Annealer

    DOE PAGES

    Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg; ...

    2018-05-03

This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit well the DW qubit interconnection network, we observe substantial speed-ups in computing time over classical approaches.
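
    The standard QUBO encoding of maximum clique, solved by brute force on a toy graph, illustrates the input format such machines require (the penalty weight and graph below are illustrative, not the paper's benchmark instances):

```python
import numpy as np
from itertools import combinations, product

# QUBO for maximum clique: minimize -sum_i x_i + P * sum over non-edges (i,j)
# of x_i x_j, with penalty P > 1 so a non-clique selection is never optimal.
def max_clique_qubo(n, edges, P=2.0):
    Q = -np.eye(n)                                 # reward for selecting a vertex
    edge_set = {frozenset(e) for e in edges}
    for i, j in combinations(range(n), 2):
        if frozenset((i, j)) not in edge_set:
            Q[i, j] += P                           # penalize selected non-edges
    return Q

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]           # triangle {0,1,2} plus a tail
Q = max_clique_qubo(4, edges)
best = min(product([0, 1], repeat=4),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("max clique indicator:", best)               # -> (1, 1, 1, 0)
```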

  8. Finding Maximum Cliques on the D-Wave Quantum Annealer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg

This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit well the DW qubit interconnection network, we observe substantial speed-ups in computing time over classical approaches.

  9. MM Algorithms for Geometric and Signomial Programming

    PubMed Central

    Lange, Kenneth; Zhou, Hua

    2013-01-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545

  10. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  11. Accuracy in breast shape alignment with 3D surface fitting algorithms.

    PubMed

    Riboldi, Marco; Gierga, David P; Chen, George T Y; Baroni, Guido

    2009-04-01

    Surface imaging is in use in radiotherapy clinical practice for patient setup optimization and monitoring. Breast alignment is accomplished by searching for a tentative spatial correspondence between the reference and daily surface shape models. In this study, the authors quantify whole breast shape alignment by relying on texture features digitized on 3D surface models. Texture feature localization was validated through repeated measurements in a silicone breast phantom, mounted on a high precision mechanical stage. Clinical investigations on breast shape alignment included 133 fractions in 18 patients treated with accelerated partial breast irradiation. The breast shape was detected with a 3D video based surface imaging system so that breathing was compensated. An in-house algorithm for breast alignment, based on surface fitting constrained by nipple matching (constrained surface fitting), was applied. Results were compared with a commercial software where no constraints are utilized (unconstrained surface fitting). Texture feature localization was validated within 2 mm in each anatomical direction. Clinical data show that unconstrained surface fitting achieves adequate accuracy in most cases, though nipple mismatch is considerably higher than residual surface distances (3.9 mm vs 0.6 mm on average). Outliers beyond 1 cm can be experienced as the result of a degenerate surface fit, where unconstrained surface fitting is not sufficient to establish spatial correspondence. In the constrained surface fitting algorithm, average surface mismatch within 1 mm was obtained when nipple position was forced to match in the [1.5; 5] mm range. In conclusion, optimal results can be obtained by trading off the desired overall surface congruence vs matching of selected landmarks (constraint). Constrained surface fitting is put forward to represent an improvement in setup accuracy for those applications where whole breast positional reproducibility is an issue.

  12. A numerical approach to controller design for the ACES facility

    NASA Technical Reports Server (NTRS)

    Frazier, W. Garth; Irwin, R. Dennis

    1993-01-01

    In recent years the employment of active control techniques for improving the performance of systems involving highly flexible structures has become a topic of considerable research interest. Most of these systems are quite complicated, using multiple actuators and sensors, and possessing high order models. The majority of analytical controller synthesis procedures capable of handling multivariable systems in a systematic way require considerable insight into the underlying mathematical theory to achieve a successful design. This insight is needed in selecting the proper weighting matrices or weighting functions to cast what is naturally a multiple constraint satisfaction problem into an unconstrained optimization problem. Although designers possessing considerable experience with these techniques have a feel for the proper choice of weights, others may spend a significant amount of time attempting to find an acceptable solution. Another disadvantage of such procedures is that the resulting controller has an order greater than or equal to that of the model used for the design. Of course, the order of these controllers can often be reduced, but again this requires a good understanding of the theory involved.

  13. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.

  14. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.

  15. Regularized minimum I-divergence methods for the inverse blackbody radiation problem

    NASA Astrophysics Data System (ADS)

    Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin

    2006-08-01

    This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
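
    For orientation, the unregularized building block of such algorithms is the classical EM-type multiplicative update that decreases Csiszár's I-divergence D(y || Ku) = Σ [y log(y/(Ku)) − y + Ku] over nonnegative u for a nonnegative kernel K. The Python sketch below shows that baseline iteration on a toy kernel rather than Planck's law; the regularized variants in the paper modify this update.

      import numpy as np

      def min_idiv_iterations(u, K, y, iters=200):
          """Multiplicative update decreasing D(y || K u); nonnegativity of u is
          preserved automatically. Penalized versions alter the denominator."""
          ones = K.T @ np.ones(K.shape[0])
          for _ in range(iters):
              ratio = y / np.clip(K @ u, 1e-12, None)   # guard against 0/0
              u = u * (K.T @ ratio) / ones
          return u

      rng = np.random.default_rng(0)
      K = rng.random((60, 40))          # toy nonnegative kernel (not Planck's law)
      u_true = rng.random(40)
      y = K @ u_true
      u_est = min_idiv_iterations(np.ones(40), K, y)
      print("relative error:", np.linalg.norm(u_est - u_true) / np.linalg.norm(u_true))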

  16. The cost of noise reduction in commercial tilt rotor aircraft

    NASA Technical Reports Server (NTRS)

    Faulkner, H. B.

    1974-01-01

    The relationship between direct operating cost (DOC) and departure noise annoyance was developed for commercial tilt rotor aircraft. This was accomplished by generating a series of tilt rotor aircraft designs to meet various noise goals at minimum DOC. These vehicles were spaced across the spectrum of possible noise levels from completely unconstrained to the quietest vehicle that could be designed within the study ground rules. A group of optimization parameters were varied to find the minimum DOC while other inputs were held constant and some external constraints were met. This basic variation was then extended to different aircraft sizes and technology time frames. It was concluded that reducing noise annoyance by designing for lower rotor tip speeds is a very promising avenue for future research and development. It appears that the cost of halving the annoyance compared to an unconstrained design is insignificant and the cost of halving the annoyance again is small.

  17. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient-constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we have developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was performed in order to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.

  18. Identification of terms to define unconstrained air transportation demands

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.; Kuhilhau, A. R.

    1982-01-01

    The factors involved in the evaluation of unconstrained air transportation systems were carefully analyzed. By definition an unconstrained system is taken to be one in which the design can employ innovative and advanced concepts no longer limited by present environmental, social, political or regulatory settings. Four principal evaluation criteria are involved: (1) service utilization, based on the operating performance characteristics as viewed by potential patrons; (2) community impacts, reflecting decisions based on the perceived impacts of the system; (3) technological feasibility, estimating what is required to reduce the system to practice; and (4) financial feasibility, predicting the ability of the concepts to attract financial support. For each of these criteria, a set of terms or descriptors was identified, which should be used in the evaluation to render it complete. It is also demonstrated that these descriptors have the following properties: (a) their interpretation may be made by different groups of evaluators; (b) their interpretations and the way they are used may depend on the stage of development of the system in which they are used; (c) in formulating the problem, all descriptors should be addressed independent of the evaluation technique selected.

  19. H2, fixed architecture, control design for large scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1990-01-01

    The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.

  20. Limited-memory trust-region methods for sparse relaxation

    NASA Astrophysics Data System (ADS)

    Adhikari, Lasith; DeGuchy, Omar; Erway, Jennifer B.; Lockhart, Shelby; Marcia, Roummel F.

    2017-08-01

    In this paper, we solve the l2-l1 sparse recovery problem by transforming the objective function of this problem into an unconstrained differentiable function and applying a limited-memory trust-region method. Unlike gradient projection-type methods, which use only the current gradient, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that our proposed approach eliminates spurious solutions more effectively while improving computational time.
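
    One common way to obtain an unconstrained differentiable surrogate of the l2-l1 objective, not necessarily the transformation used by the authors, is to smooth the absolute value with sqrt(x_i^2 + eps). The Python sketch below does this and hands the result to SciPy's L-BFGS-B, a quasi-Newton stand-in for the limited-memory trust-region method.

      import numpy as np
      from scipy.optimize import minimize

      def solve_l2_l1_smoothed(A, b, lam=0.1, eps=1e-6):
          """min_x 0.5*||Ax - b||^2 + lam*sum(sqrt(x_i^2 + eps)),
          a differentiable surrogate of the l2-l1 sparse recovery problem."""
          def fun(x):
              r = A @ x - b
              s = np.sqrt(x**2 + eps)
              return 0.5 * r @ r + lam * s.sum(), A.T @ r + lam * x / s
          res = minimize(fun, np.zeros(A.shape[1]), jac=True, method="L-BFGS-B")
          return res.x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((80, 200))
      x_true = np.zeros(200); x_true[[3, 50, 120]] = [1.5, -2.0, 1.0]
      b = A @ x_true + 0.01 * rng.standard_normal(80)
      x = solve_l2_l1_smoothed(A, b)
      print("largest recovered entries:", np.sort(np.argsort(np.abs(x))[-3:]))  # expected [3 50 120]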

  1. Advanced fitness landscape analysis and the performance of memetic algorithms.

    PubMed

    Merz, Peter

    2004-01-01

    Memetic algorithms (MAs) have proven to be very effective in combinatorial optimization. This paper offers explanations as to why this is so by investigating the performance of MAs in terms of efficiency and effectiveness. A special class of MAs is used to discuss efficiency and effectiveness for local search and evolutionary meta-search. It is shown that the efficiency of MAs can be increased drastically with the use of domain knowledge. However, effectiveness highly depends on the structure of the problem. As is well-known, identifying this structure is made easier with the notion of fitness landscapes: the local properties of the fitness landscape strongly influence the effectiveness of the local search while the global properties strongly influence the effectiveness of the evolutionary meta-search. This paper also introduces new techniques for analyzing the fitness landscapes of combinatorial problems; these techniques focus on the investigation of random walks in the fitness landscape starting at locally optimal solutions as well as on the escape from the basins of attractions of current local optima. It is shown for NK-landscapes and landscapes of the unconstrained binary quadratic programming problem (BQP) that a random walk to another local optimum can be used to explain the efficiency of recombination in comparison to mutation. Moreover, the paper shows that other aspects like the size of the basins of attractions of local optima are important for the efficiency of MAs and a local search escape analysis is proposed. These simple analysis techniques have several advantages over previously proposed statistical measures and provide valuable insight into the behaviour of MAs on different kinds of landscapes.
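
    A minimal sketch of one classical statistic in this family is given below: the lag-1 autocorrelation of fitness along a one-bit-flip random walk through an unconstrained BQP landscape (high autocorrelation indicates a smooth landscape). The paper's walks start at local optima; this simplified Python version starts at a random point and is meant only to show the mechanics.

      import numpy as np

      def random_walk_autocorrelation(Q, steps=2000, lag=1, seed=0):
          """One-bit-flip random walk on x in {0,1}^n with fitness x^T Q x;
          returns the lag-k autocorrelation of the fitness time series."""
          rng = np.random.default_rng(seed)
          n = Q.shape[0]
          x = rng.integers(0, 2, n)
          f = np.empty(steps)
          for t in range(steps):
              f[t] = x @ Q @ x
              x[rng.integers(n)] ^= 1      # flip one random bit
          f = f - f.mean()
          return (f[:-lag] @ f[lag:]) / (f @ f)

      rng = np.random.default_rng(2)
      Q = rng.standard_normal((50, 50)); Q = (Q + Q.T) / 2   # random BQP instance
      print("lag-1 autocorrelation:", random_walk_autocorrelation(Q))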

  2. σ -SCF: A Direct Energy-targeting Method To Mean-field Excited States

    NASA Astrophysics Data System (ADS)

    Ye, Hongzhou; Welborn, Matthew; Ricke, Nathan; van Voorhis, Troy

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g. Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF, tend to fall into the lowest solution consistent with a given symmetry - a problem known as "variational collapse". In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states - ground or excited - are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). This work was funded by a Grant from NSF (CHE-1464804).

  3. An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Afshar Nadjafi, Behrouz; Shadrokh, Shahram

    This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch and bound algorithm for an extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch and bound tree. Finally, some test problems are solved and computational results are reported.

  4. Adiabatic Quantum Computing with Neutral Atoms

    NASA Astrophysics Data System (ADS)

    Hankin, Aaron; Biedermann, Grant; Burns, George; Jau, Yuan-Yu; Johnson, Cort; Kemme, Shanalyn; Landahl, Andrew; Mangan, Michael; Parazzoli, L. Paul; Schwindt, Peter; Armstrong, Darrell

    2012-06-01

    We are developing, both theoretically and experimentally, a neutral atom qubit approach to adiabatic quantum computation. Using our microfabricated diffractive optical elements, we plan to implement an array of optical traps for cesium atoms and use Rydberg-dressed ground states to provide a controlled atom-atom interaction. We will develop this experimental capability to generate a two-qubit adiabatic evolution aimed specifically toward demonstrating the two-qubit quadratic unconstrained binary optimization (QUBO) routine.

  5. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
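
    The core idea, converting the first-order necessary conditions of a constrained minimum into an unconstrained least-squares merit function that a population-based search can minimize globally, can be sketched as follows. SciPy's differential evolution stands in for the authors' genetic algorithm, and the toy problem is an illustrative assumption.

      import numpy as np
      from scipy.optimize import differential_evolution

      # Toy problem: min f(x) = x1^2 + x2^2  subject to  g(x) = x1 + x2 - 1 = 0.
      # Necessary conditions: grad f + lam*grad g = 0 and g = 0; their squared
      # residual is an unconstrained function of (x1, x2, lam).
      def kkt_residual(z):
          x1, x2, lam = z
          r = np.array([2*x1 + lam,      # dL/dx1
                        2*x2 + lam,      # dL/dx2
                        x1 + x2 - 1.0])  # constraint
          return r @ r

      res = differential_evolution(kkt_residual, bounds=[(-5, 5)] * 3, seed=3, tol=1e-10)
      print(res.x)   # -> approx [0.5, 0.5, -1.0]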

  6. The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising.

    PubMed

    Donoho, David L; Gavish, Matan; Montanari, Andrea

    2013-05-21

    Let X_0 be an unknown M by N matrix. In matrix recovery, one takes n < MN linear measurements y_1, ..., y_n of X_0, where y_i = Tr(A_i^T X_0) and each A_i is an M by N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem min ||X||_* subject to y_i = Tr(A_i^T X) for all 1 ≤ i ≤ n, where ||·||_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction δ(n,M,N) = n/(MN), rank fraction ρ = rank(X_0)/min{M,N}, and aspect ratio β = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve δ*(ρ) = δ*(ρ;β) exists such that, if δ > δ*(ρ), NNM typically succeeds for large M,N, whereas if δ < δ*(ρ), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M by N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y − X||_F^2/2 + λ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(ρ;β) = lim_{M,N→∞} inf_λ sup_{rank(X) ≤ ρ·M} MSE(X, X_hat(λ)), where M/N → β. We report extensive experiments showing that the phase transition δ*(ρ) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(ρ) = M(ρ;β) in the second problem, matrix denoising in Gaussian noise: δ*(ρ) = M(ρ), for any rank fraction 0 < ρ < 1 (at each common aspect ratio β). Our experiments considered matrices belonging to two constraint classes: real M by N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N by N matrices, of various ranks.
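
    The unconstrained denoising problem above has a well-known closed-form minimizer: soft-threshold the singular values of Y at λ. A minimal Python sketch:

      import numpy as np

      def nuclear_norm_denoise(Y, lam):
          """Closed-form minimizer of 0.5*||Y - X||_F^2 + lam*||X||_* :
          soft-thresholding of the singular values of Y at level lam."""
          U, s, Vt = np.linalg.svd(Y, full_matrices=False)
          return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

      rng = np.random.default_rng(4)
      X0 = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 60))   # rank 5
      Y = X0 + 0.5 * rng.standard_normal((40, 60))
      Xhat = nuclear_norm_denoise(Y, lam=5.0)
      print("error denoised vs raw:", np.linalg.norm(Xhat - X0), np.linalg.norm(Y - X0))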

  7. Improvement of the cruise performances of a wing by means of aerodynamic optimization. Validation with a Far-Field method

    NASA Astrophysics Data System (ADS)

    Jiménez-Varona, J.; Ponsin Roca, J.

    2015-06-01

    Under a contract with AIRBUS MILITARY (AI-M), an exercise to analyze the potential of optimization techniques to improve the wing performances at cruise conditions has been carried out by using an in-house design code. The original wing was provided by AI-M and several constraints were posed for the redesign. To maximize the aerodynamic efficiency at cruise, optimizations were performed using the design techniques developed internally at INTA under a research program (Programa de Termofluidodinámica). The code is a gradient-based optimization code, which uses a classical finite differences approach for gradient computations. Several techniques for search direction computation are implemented for unconstrained and constrained problems. Techniques for geometry modification are based on different approaches, including perturbation functions for the thickness and/or mean line distributions and Bézier curve fittings of a certain degree. It is very important to address a real design problem, which involves several constraints that significantly reduce the feasible design space, and the code must be assessed in order to check its capabilities and possible drawbacks. Lessons learnt will help in the development of future enhancements. In addition, the results were validated using the well-known TAU flow solver and a far-field drag method in order to accurately determine the improvement in terms of drag counts.

  8. [Tripolar cups].

    PubMed

    Fink, B

    2015-04-01

    Tripolar cups can be separated into constrained and unconstrained dual-mobility cups. The latter show better survival and revision rates. The main problem is the polyethylene wear. Therefore modern types of polyethylene are used in these cups. The indications for dual-mobility cups are recurrent dislocation and situations where the risk of dislocation is increased. Georg Thieme Verlag KG Stuttgart · New York.

  9. Hamilton's Equations with Euler Parameters for Rigid Body Dynamics Modeling. Chapter 3

    NASA Technical Reports Server (NTRS)

    Shivarama, Ravishankar; Fahrenthold, Eric P.

    2004-01-01

    A combination of Euler parameter kinematics and Hamiltonian mechanics provides a rigid body dynamics model well suited for use in strongly nonlinear problems involving arbitrarily large rotations. The model is unconstrained, free of singularities, includes a general potential energy function and a minimum set of momentum variables, and takes an explicit state space form convenient for numerical implementation. The general formulation may be specialized to address particular applications, as illustrated in several three dimensional example problems.

  10. A new modified conjugate gradient coefficient for solving system of linear equations

    NASA Astrophysics Data System (ADS)

    Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is an evolution of computational methods for solving unconstrained optimization problems. This approach is easy to implement due to its simplicity and has been proven to be effective in solving real-life applications. Although this field has received a copious amount of attention in recent years, some of the new approaches to the CG algorithm cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. This new CG method is tested on a set of test functions under exact line search. Its performance is then compared to that of some of the well-known previous CG methods based on the number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency amongst all the methods tested. This paper also includes an application of the new CG algorithm for solving a large system of linear equations.
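
    For orientation, the sketch below shows the generic nonlinear CG loop with the classical Fletcher-Reeves coefficient; a new CG method of the kind proposed in this paper usually amounts to a different formula for the coefficient beta in this loop. The backtracking line search and restart safeguard are implementation choices of this sketch, not details taken from the paper (which uses exact line search).

      import numpy as np

      def cg_minimize(f, grad, x0, tol=1e-6, max_iter=2000):
          """Nonlinear conjugate gradient with the Fletcher-Reeves coefficient."""
          x = np.asarray(x0, float)
          g = grad(x)
          d = -g
          for _ in range(max_iter):
              if np.linalg.norm(g) < tol:
                  break
              t, fx = 1.0, f(x)
              while f(x + t * d) > fx + 1e-4 * t * (g @ d):   # Armijo backtracking
                  t *= 0.5
              x_new = x + t * d
              g_new = grad(x_new)
              beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves; swap in a new rule here
              d = -g_new + beta * d
              if g_new @ d >= 0:                 # safeguard: restart with steepest descent
                  d = -g_new
              x, g = x_new, g_new
          return x

      # Rosenbrock function, a staple unconstrained-optimization test problem
      f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
      grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                                 200*(x[1] - x[0]**2)])
      print(cg_minimize(f, grad, [-1.2, 1.0]))   # -> approx [1, 1]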

  11. Adiabatic Quantum Computation with Neutral Atoms

    NASA Astrophysics Data System (ADS)

    Biedermann, Grant

    2013-03-01

    We are implementing a new platform for adiabatic quantum computation (AQC)[2] based on trapped neutral atoms whose coupling is mediated by the dipole-dipole interactions of Rydberg states. Ground state cesium atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism,[3,4] thereby providing the requisite entangling interactions. As a benchmark we study a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. In collaboration with Lambert Parazzoli, Sandia National Laboratories; Aaron Hankin, Center for Quantum Information and Control (CQuIC), University of New Mexico; James Chin-Wen Chou, Yuan-Yu Jau, Peter Schwindt, Cort Johnson, and George Burns, Sandia National Laboratories; Tyler Keating, Krittika Goyal, and Ivan Deutsch, Center for Quantum Information and Control (CQuIC), University of New Mexico; and Andrew Landahl, Sandia National Laboratories. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories

  12. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    PubMed

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
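
    The standard (non-updating) reduction of an equality-constrained least squares problem to an unconstrained one is the QR-based null-space method, sketched below in Python; the paper's updating procedure for the R factor is not reproduced here.

      import numpy as np

      def lse_nullspace(A, b, B, d):
          """Solve min ||Ax - b||_2 subject to Bx = d (B is m x n, full row rank):
          QR-factorize B^T, satisfy the constraint in range(Q1), then solve an
          unconstrained least-squares problem over the null space spanned by Q2."""
          m, n = B.shape
          Q, R = np.linalg.qr(B.T, mode="complete")   # B^T = Q R with Q of size n x n
          Q1, Q2 = Q[:, :m], Q[:, m:]
          y1 = np.linalg.solve(R[:m, :].T, d)         # R1^T y1 = d fixes the constrained part
          y2, *_ = np.linalg.lstsq(A @ Q2, b - A @ (Q1 @ y1), rcond=None)
          return Q1 @ y1 + Q2 @ y2

      rng = np.random.default_rng(5)
      A, b = rng.standard_normal((20, 6)), rng.standard_normal(20)
      B, d = rng.standard_normal((2, 6)), rng.standard_normal(2)
      x = lse_nullspace(A, b, B, d)
      print("constraint residual:", np.linalg.norm(B @ x - d))   # ~ machine precision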

  13. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting the corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, the landmark extraction is always prone to error, which will influence the registration results. Localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, used an automatic method to extract the landmarks. Combining these two steps, we have proposed an automatic, exact and robust registration method and have obtained satisfactory registration results.

  14. Research on design method of the full form ship with minimum thrust deduction factor

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Miao, Ai-qin; Zhang, Zhu-xin

    2015-04-01

    In the preliminary design stage of full form ships, in order to obtain a hull form with low resistance and maximum propulsion efficiency, an optimization design program for a full form ship with the minimum thrust deduction factor has been developed, which combines potential flow theory and boundary layer theory with optimization techniques. In the optimization process, the Sequential Unconstrained Minimization Technique (SUMT) interior point method of Nonlinear Programming (NLP) was employed with the minimum thrust deduction factor as the objective function. An appropriate displacement is a basic constraint condition, and the boundary layer separation is an additional one. The parameters of the hull form modification function are used as design variables. Finally, a numerical optimization example for the lines of the after-body of a 50000 DWT product oil tanker is provided, which indicates that the propulsion efficiency was improved distinctly by this optimal design method.
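
    A minimal sketch of the SUMT interior-point idea named above: repeatedly minimize the objective plus a logarithmic barrier over the inequality constraints while shrinking the barrier weight. The toy objective and the Nelder-Mead inner solver are illustrative stand-ins for the hull-form problem.

      import numpy as np
      from scipy.optimize import minimize

      def sumt_interior(f, gs, x0, r=1.0, shrink=0.1, outer=8):
          """SUMT: solve a sequence of unconstrained problems
          min f(x) - r * sum(log(-g_i(x))) with r -> 0; each g_i(x) <= 0 is an
          inequality constraint and x0 must be strictly feasible."""
          x = np.asarray(x0, float)
          for _ in range(outer):
              def penalized(x):
                  g = np.array([gi(x) for gi in gs])
                  if np.any(g >= 0):
                      return np.inf            # keep iterates strictly interior
                  return f(x) - r * np.log(-g).sum()
              x = minimize(penalized, x, method="Nelder-Mead").x
              r *= shrink
          return x

      # Toy stand-in: min (x-2)^2 + (y-1)^2  subject to  x + y <= 2 and x >= 0
      f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
      gs = [lambda x: x[0] + x[1] - 2, lambda x: -x[0]]
      print(sumt_interior(f, gs, x0=[0.5, 0.5]))   # -> approx [1.5, 0.5]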

  15. Optimizing an Actuator Array for the Control of Multi-Frequency Noise in Aircraft Interiors

    NASA Technical Reports Server (NTRS)

    Palumbo, D. L.; Padula, S. L.

    1997-01-01

    Techniques developed for selecting an optimized actuator array for interior noise reduction at a single frequency are extended to the multi-frequency case. Transfer functions for 64 actuators were obtained at 5 frequencies from ground testing the rear section of a fully trimmed DC-9 fuselage. A single loudspeaker facing the left side of the aircraft was the primary source. A combinatorial search procedure (tabu search) was employed to find optimum actuator subsets of 2 to 16 actuators. Noise reduction predictions derived from the transfer functions were used as a basis for evaluating actuator subsets during optimization. Results indicate that it is necessary to constrain actuator forces during optimization. Unconstrained optimizations selected actuators which require unrealistically large forces. Two methods of constraint are evaluated. It is shown that a fast, but approximate, method yields results equivalent to an accurate, but computationally expensive, method.

  16. Some new results on the central overlap problem in astrometry

    NASA Astrophysics Data System (ADS)

    Rapaport, M.

    1998-07-01

    The central overlap problem in astrometry has been revisited in recent years by Eichhorn (1988), who explicitly inverted the matrix of a constrained least squares problem. In this paper, the general explicit solution of the unconstrained central overlap problem is given. We also give the explicit solution for another set of constraints; this result is a confirmation of a conjecture expressed by Eichhorn (1988). We also consider the use of iterative methods to solve the central overlap problem. A surprising result is obtained when the classical Gauss-Seidel method is used: the iterations converge immediately to the general solution of the equations; we explain this property by writing the central overlap problem in a new set of variables.

  17. Modeling Latent Interactions at Level 2 in Multilevel Structural Equation Models: An Evaluation of Mean-Centered and Residual-Centered Unconstrained Approaches

    ERIC Educational Resources Information Center

    Leite, Walter L.; Zuo, Youzhen

    2011-01-01

    Among the many methods currently available for estimating latent variable interactions, the unconstrained approach is attractive to applied researchers because of its relatively easy implementation with any structural equation modeling (SEM) software. Using a Monte Carlo simulation study, we extended and evaluated the unconstrained approach to…

  18. Hybrid genetic algorithm with an adaptive penalty function for fitting multimodal experimental data: application to exchange-coupled non-Kramers binuclear iron active sites.

    PubMed

    Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I

    2011-09-26

    A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorrn Adaptive Penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable temperature variable field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.

  19. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    NASA Astrophysics Data System (ADS)

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

    Wind turbine wakes can significantly disrupt the performance of further downstream turbines in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the whole performance of the project. An accurate description of the wake interactions, combined with a computationally affordable layout optimization strategy, is therefore an efficient resource when addressing the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning setup is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian velocity deficit profile [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Results show, compared to the baseline gridded layout, a wind power output increase between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.

  20. Solving Fuzzy Fractional Differential Equations Using Zadeh's Extension Principle

    PubMed Central

    Ahmad, M. Z.; Hasan, M. K.; Abbasbandy, S.

    2013-01-01

    We study a fuzzy fractional differential equation (FFDE) and present its solution using Zadeh's extension principle. The proposed study extends the case of fuzzy differential equations of integer order. We also propose a numerical method to approximate the solution of FFDEs. To solve nonlinear problems, the proposed numerical method is then incorporated into an unconstrained optimisation technique. Several numerical examples are provided. PMID:24082853

  1. Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications

    NASA Astrophysics Data System (ADS)

    He, K.; Zhu, W. D.

    2011-07-01

    A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistic function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.
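
    The logistic function transformation mentioned above can be sketched in isolation: box-constrained parameters are mapped to unconstrained variables, so an unconstrained solver such as Levenberg-Marquardt (which accepts no bounds) applies directly. The exponential-decay model below is a toy stand-in for the damage-detection residual, not the paper's model.

      import numpy as np
      from scipy.optimize import least_squares

      def to_box(z, lo, hi):
          """Logistic transformation: unconstrained z -> parameter in (lo, hi)."""
          return lo + (hi - lo) / (1.0 + np.exp(-z))

      # Toy model: fit y = a*exp(-b*t) with a constrained to (0, 5) and b to (0, 2)
      lo, hi = np.array([0.0, 0.0]), np.array([5.0, 2.0])
      t = np.linspace(0, 4, 30)
      y = 3.0 * np.exp(-0.7 * t)

      def residuals(z):
          a, b = to_box(z, lo, hi)
          return a * np.exp(-b * t) - y

      # method="lm" is SciPy's Levenberg-Marquardt; it supports no bound constraints,
      # which is exactly why the logistic transformation is applied first.
      res = least_squares(residuals, x0=np.zeros(2), method="lm")
      print(to_box(res.x, lo, hi))   # -> approx [3.0, 0.7]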

  2. Closed-form expressions for flip angle variation that maximize total signal in T1-weighted rapid gradient echo MRI.

    PubMed

    Drobnitzky, Matthias; Klose, Uwe

    2017-03-01

    Magnetization-prepared rapid gradient-echo (MPRAGE) sequences are commonly employed for T1-weighted structural brain imaging. Following a contrast preparation radiofrequency (RF) pulse, the data acquisition proceeds under nonequilibrium conditions of the relaxing longitudinal magnetization. Variation of the flip angle can be used to maximize total available signal. Simulated annealing or greedy algorithms have so far been published to numerically solve this problem, with signal-to-noise ratios optimized for clinical imaging scenarios by adhering to a predefined shape of the signal evolution. We propose an unconstrained optimization of the MPRAGE experiment that employs techniques from resource allocation theory. A new dynamic programming solution is introduced that yields closed-form expressions for optimal flip angle variation. Flip angle series are proposed that maximize total transverse magnetization (Mxy) for a range of physiologic T1 values. A 3D MPRAGE sequence is modified to allow for a controlled variation of the excitation angle. Experiments employing a T1 contrast phantom are performed at 3T. 1D acquisitions without phase encoding permit measurement of the temporal development of Mxy. Image mean signal and standard deviation for reference flip angle trains are compared in 2D measurements. Signal profiles at sharp phantom edges are acquired to assess image blurring related to nonuniform Mxy development. A novel closed-form expression for flip angle variation is found that constitutes the optimal policy to reach maximum total signal. It numerically equals previously published results of other authors when evaluated under their simplifying assumptions. Longitudinal magnetization (Mz) is exhaustively used without causing abrupt changes in the measured MR signal, which is a prerequisite for artifact free images. Phantom experiments at 3T verify the expected benefit for total accumulated k-space signal when compared with published flip angle series. Describing the MR signal collection in MPRAGE sequences as a Bellman problem is a new concept. By means of recursively solving a series of overlapping subproblems, this leads to an elegant solution for the problem of maximizing total available MR signal in k-space. A closed-form expression for flip angle variation avoids the complexity of numerical optimization and eases access to controlled variation in an attempt to identify potential clinical applications. © 2017 American Association of Physicists in Medicine.
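
    To see why a Bellman formulation admits a closed form, consider a simplified spoiled pulse train in which pulse k reads out Mz_k sin(theta_k) and Mz evolves as Mz_{k+1} = Mz_k cos(theta_k) E1 + M0 (1 - E1), with E1 = exp(-TR/T1). The value function is then affine in Mz, so the optimal angles obey tan(theta_k) = 1/(E1 a_{k+1}) with a_k = sqrt(1 + (E1 a_{k+1})^2) and a final 90-degree pulse. The Python sketch below is a reconstruction under these simplifying assumptions, not the authors' published expression.

      import numpy as np

      def optimal_flip_angles(n_pulses, TR=0.008, T1=1.0):
          """Backward (Bellman) recursion maximizing the total transverse signal
          sum_k Mz_k * sin(theta_k); a is the slope of the affine value function."""
          E1 = np.exp(-TR / T1)
          a = 1.0                      # after the last pulse, theta_N = 90 degrees
          thetas = [np.pi / 2]
          for _ in range(n_pulses - 1):
              thetas.append(np.arctan(1.0 / (E1 * a)))
              a = np.sqrt(1.0 + (E1 * a)**2)
          return np.degrees(thetas[::-1])

      angles = optimal_flip_angles(32)
      print(angles[:3], "...", angles[-3:])   # angles ramp up toward the final 90 degrees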

  3. A constraint-based evolutionary learning approach to the expectation maximization for optimal estimation of the hidden Markov model for speech signal modeling.

    PubMed

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2009-02-01

    This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable to the estimation of constraint-based models with many constraints and large numbers of parameters that are estimated with EM, like the HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and the EM for better estimation of the HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach where EM is plugged into the EA periodically, after the execution of the EA for a specific period of time, to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using a variable segmentation to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).

  4. Digital Image Restoration Under a Regression Model - The Unconstrained, Linear Equality and Inequality Constrained Approaches

    DTIC Science & Technology

    1974-01-01

    DIGITAL IMAGE RESTORATION UNDER A REGRESSION MODEL - THE UNCONSTRAINED, LINEAR EQUALITY AND INEQUALITY CONSTRAINED APPROACHES. Report 520, January 1974. Nelson Delfino d'Avila Mascarenhas. ...a two-dimensional form adequately describes the linear model. A discretization is performed by using quadrature methods. By trans...

  5. Optimum structural design with static aeroelastic constraints

    NASA Technical Reports Server (NTRS)

    Bowman, Keith B; Grandhi, Ramana V.; Eastep, F. E.

    1989-01-01

    The static aeroelastic performance characteristics, divergence velocity, control effectiveness and lift effectiveness are considered in obtaining an optimum weight structure. A typical swept wing structure is used with upper and lower skins, spar and rib thicknesses, and spar cap and vertical post cross-sectional areas as the design parameters. Incompressible aerodynamic strip theory is used to derive the constraint formulations, and aerodynamic load matrices. A Sequential Unconstrained Minimization Technique (SUMT) algorithm is used to optimize the wing structure to meet the desired performance constraints.

  6. Optimal Aerodynamic Design of Conventional and Coaxial Helicopter Rotors in Hover and Forward Flight

    DTIC Science & Technology

    2015-12-28

    ...forward flight. Orchard and Newman [6] investigated fundamental design features of compound helicopters using a wing, a single rotor, and a propulsor... style compound. For the case considered here, the coaxial rotors are unconstrained in lift offset. If a wing were used in a case that also included a lift...

  7. Proceedings of the Quantum Computation for Physical Modeling Workshop 2004. Held in North Falmouth, MA on 12-15 September 2004

    DTIC Science & Technology

    2005-10-01

    ...late the difficulty of some basic 1-bit and n-bit quantum and classical operations in a simple unconstrained scenario. KEY WORDS: Time evolution... A quantum circuit and design are presented for an optimized entangling probe attacking the BB84 protocol of quantum key distribution (QKD) and yielding... unambiguous, at least some of the time. It follows that the BB84 (Bennett-Brassard 1984) protocol of quantum key distribution has a vulnerability similar to...

  8. Simple wavefront correction framework for two-photon microscopy of in-vivo brain

    PubMed Central

    Galwaduge, P. T.; Kim, S. H.; Grosberg, L. E.; Hillman, E. M. C.

    2015-01-01

    We present an easily implemented wavefront correction scheme that has been specifically designed for in-vivo brain imaging. The system can be implemented with a single liquid crystal spatial light modulator (LCSLM), which makes it compatible with existing patterned illumination setups, and provides measurable signal improvements even after a few seconds of optimization. The optimization scheme is signal-based and does not require exogenous guide-stars, repeated image acquisition or beam constraint. The unconstrained beam approach allows the use of Zernike functions for aberration correction and Hadamard functions for scattering correction. Low order corrections performed in mouse brain were found to be valid up to hundreds of microns away from the correction location. PMID:26309763

  9. Constrained Laboratory vs. Unconstrained Steering-Induced Rollover Crash Tests.

    PubMed

    Kerrigan, Jason R; Toczyski, Jacek; Roberts, Carolyn; Zhang, Qi; Clauser, Mark

    2015-01-01

    The goal of this study was to evaluate how well an in-laboratory rollover crash test methodology that constrains vehicle motion can reproduce the dynamics of unconstrained full-scale steering-induced rollover crash tests in sand. Data from previously-published unconstrained steering-induced rollover crash tests using a full-size pickup and mid-sized sedan were analyzed to determine vehicle-to-ground impact conditions and kinematic response of the vehicles throughout the tests. Then, a pair of replicate vehicles were prepared to match the inertial properties of the steering-induced test vehicles and configured to record dynamic roof structure deformations and kinematic response. Both vehicles experienced greater increases in roll-axis angular velocities in the unconstrained tests than in the constrained tests; however, the increases that occurred during the trailing side roof interaction were nearly identical between tests for both vehicles. Both vehicles experienced linear accelerations in the constrained tests that were similar to those in the unconstrained tests, but the pickup, in particular, had accelerations that were matched in magnitude, timing, and duration very closely between the two test types. Deformations in the truck test were higher in the constrained than the unconstrained, and deformations in the sedan were greater in the unconstrained than the constrained as a result of constraints of the test fixture, and differences in impact velocity for the trailing side. The results of the current study suggest that in-laboratory rollover tests can be used to simulate the injury-causing portions of unconstrained rollover crashes. To date, such a demonstration has not yet been published in the open literature. This study did, however, show that road surface can affect vehicle response in a way that may not be able to be mimicked in the laboratory. Lastly, this study showed that configuring the in-laboratory tests to match the leading-side touchdown conditions could result in differences in the trailing side impact conditions.

  10. A New Method for Unconstrained Heart Rate Monitoring

    DTIC Science & Technology

    2001-10-25

    ...members. However, the care of bedridden elderly persons is not an easy task, and this causes severe psychological and financial problems for other family... physical and mental conditions of bedridden elderly people at home and patients at hospitals, and to contribute to the labor saving of the care and the... not suitable for home care of bedridden elderly people. Our method provides a very small, simple and mechanically rugged device suitable for home...

  11. Development of multidisciplinary design optimization procedures for smart composite wings and turbomachinery blades

    NASA Astrophysics Data System (ADS)

    Jha, Ratneshwar

    Multidisciplinary design optimization (MDO) procedures have been developed for smart composite wings and turbomachinery blades. The analysis and optimization methods used are computationally efficient and sufficiently rigorous. Therefore, the developed MDO procedures are well suited for actual design applications. The optimization procedure for the conceptual design of composite aircraft wings with surface bonded piezoelectric actuators involves the coupling of structural mechanics, aeroelasticity, aerodynamics and controls. The load carrying member of the wing is represented as a single-celled composite box beam. Each wall of the box beam is analyzed as a composite laminate using a refined higher-order displacement field to account for the variations in transverse shear stresses through the thickness. Therefore, the model is applicable for the analysis of composite wings of arbitrary thickness. Detailed structural modeling issues associated with piezoelectric actuation of composite structures are considered. The governing equations of motion are solved using the finite element method to analyze practical wing geometries. Three-dimensional aerodynamic computations are performed using a panel code based on the constant-pressure lifting surface method to obtain steady and unsteady forces. The Laplace domain method of aeroelastic analysis produces root-loci of the system which gives an insight into the physical phenomena leading to flutter/divergence and can be efficiently integrated within an optimization procedure. The significance of the refined higher-order displacement field on the aeroelastic stability of composite wings has been established. The effect of composite ply orientations on flutter and divergence speeds has been studied. The Kreisselmeier-Steinhauser (K-S) function approach is used to efficiently integrate the objective functions and constraints into a single envelope function. The resulting unconstrained optimization problem is solved using the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimization problem is formulated with the objective of simultaneously minimizing wing weight and maximizing its aerodynamic efficiency. Design variables include composite ply orientations, ply thicknesses, wing sweep, piezoelectric actuator thickness and actuator voltage. Constraints are placed on the flutter/divergence dynamic pressure, wing root stresses and the maximum electric field applied to the actuators. Numerical results are presented showing significant improvements, after optimization, compared to reference designs. The multidisciplinary optimization procedure for the design of turbomachinery blades integrates aerodynamic and heat transfer design objective criteria along with various mechanical and geometric constraints on the blade geometry. The airfoil shape is represented by Bezier-Bernstein polynomials, which results in a relatively small number of design variables for the optimization. Thin shear layer approximation of the Navier-Stokes equation is used for the viscous flow calculations. Grid generation is accomplished by solving Poisson equations. The maximum and average blade temperatures are obtained through a finite element analysis. Total pressure and exit kinetic energy losses are minimized, with constraints on blade temperatures and geometry. The constrained multiobjective optimization problem is solved using the K-S function approach. The results for the numerical example show significant improvements after optimization.
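
    The Kreisselmeier-Steinhauser envelope used in both procedures above can be stated compactly: KS(g) = max(g) + (1/ρ) ln Σ_i exp(ρ (g_i − max(g))), a smooth, conservative upper bound on the largest constraint value that tightens as ρ grows. A minimal, numerically stable Python sketch:

      import numpy as np

      def ks_function(g, rho=50.0):
          """Kreisselmeier-Steinhauser aggregation of constraint values g_i <= 0
          into one smooth envelope; shifting by max(g) avoids overflow in exp."""
          g = np.asarray(g, float)
          m = g.max()
          return m + np.log(np.exp(rho * (g - m)).sum()) / rho

      print(ks_function([-0.3, -0.05, -0.4]))   # slightly above max(g) = -0.05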

  12. Optimized Periocular Template Selection for Human Recognition

    PubMed Central

    Sa, Pankaj K.; Majhi, Banshidhar

    2013-01-01

    A novel approach is proposed for selecting a rectangular template around the periocular region that is optimal for human recognition. A comparatively larger template of the periocular image than the optimal one can be slightly more potent for recognition, but the larger template heavily slows down the biometric system by making feature extraction computationally intensive and increasing the database size. A smaller template, on the contrary, cannot yield the desired recognition, though the smaller template performs faster due to the low computation for feature extraction. These two contradictory objectives (namely, (a) to minimize the size of the periocular template and (b) to maximize the recognition through the template) are optimized through the proposed research. This paper proposes four different approaches for dynamic optimal template selection from the periocular region. The proposed methods are tested on the publicly available unconstrained UBIRISv2 and FERET databases and satisfactory results have been achieved. The template thus obtained can be used for recognition of individuals in an organization and can be generalized to recognize every citizen of a nation. PMID:23984370

  13. JWST Wavefront Control Toolbox

    NASA Technical Reports Server (NTRS)

    Shin, Shahram Ron; Aronstein, David L.

    2011-01-01

    A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces, using influence functions to correct for possible misalignments of the James Webb Space Telescope (JWST). The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing the cost function. The toolbox can be used in either constrained or unconstrained optimizations. The control process involves 1 to 7 degrees-of-freedom perturbations per segment of the primary mirror, in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed based on a "wrapper" approach, that handle the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints of the moving parts in the mirrors.

  14. Long-range interacting systems in the unconstrained ensemble.

    PubMed

    Latella, Ivan; Pérez-Madrid, Agustín; Campa, Alessandro; Casetti, Lapo; Ruffo, Stefano

    2017-01-01

    Completely open systems can exchange heat, work, and matter with the environment. While energy, volume, and number of particles fluctuate under completely open conditions, the equilibrium states of the system, if they exist, can be specified using the temperature, pressure, and chemical potential as control parameters. The unconstrained ensemble is the statistical ensemble describing completely open systems and the replica energy is the appropriate free energy for these control parameters from which the thermodynamics must be derived. It turns out that macroscopic systems with short-range interactions cannot attain equilibrium configurations in the unconstrained ensemble, since temperature, pressure, and chemical potential cannot be taken as a set of independent variables in this case. In contrast, we show that systems with long-range interactions can reach states of thermodynamic equilibrium in the unconstrained ensemble. To illustrate this fact, we consider a modification of the Thirring model and compare the unconstrained ensemble with the canonical and grand-canonical ones: The more the ensemble is constrained by fixing the volume or number of particles, the larger the space of parameters defining the equilibrium configurations.

  15. Interconnection-wide hour-ahead scheduling in the presence of intermittent renewables and demand response: A surplus maximizing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned

    This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus satisfying transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, both the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B, while increasing the annual surplus by 9.32B in comparison to the standalone case.
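
    A toy two-area version of the surplus-maximizing transfer problem reads as follows; the linear supply and demand curves and the tie-line limit are invented numbers, not the paper's interconnection model.

        import numpy as np
        from scipy.optimize import minimize_scalar

        a = np.array([10.0, 30.0]); b = np.array([0.05, 0.08])  # supply: p = a + b*q
        c = np.array([90.0, 95.0]); d = np.array([0.04, 0.05])  # demand: p = c - d*q
        F_MAX = 200.0                     # tie-line limit (MW); binds in this example

        def surplus(flow):
            """Global surplus when `flow` MW is exported from area 0 to area 1."""
            total = 0.0
            for i, net in enumerate([-flow, +flow]):   # net import of each area
                # Local clearing with the net import added to supply: q_d = q_s + net
                q_s = (c[i] - a[i] - d[i] * net) / (b[i] + d[i])
                q_d = q_s + net
                # Gross consumer benefit minus generation cost (transfers cancel)
                total += c[i]*q_d - 0.5*d[i]*q_d**2 - a[i]*q_s - 0.5*b[i]*q_s**2
            return total

        res = minimize_scalar(lambda f: -surplus(f), bounds=(-F_MAX, F_MAX),
                              method='bounded')
        print("optimal export from area 0 (MW):", res.x)

    Without the bound, the optimum equalizes prices across the two areas; with it, the flow sits at the limit and the areas keep different prices, which is the "constrained areas" behavior noted above.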

  16. Interconnection-wide hour-ahead scheduling in the presence of intermittent renewables and demand response: A surplus maximizing approach

    DOE PAGES

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned; ...

    2016-12-23

    This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus satisfying transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, both the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B, while increasing the annual surplus by 9.32B in comparison to the standalone case.

  17. Reliability of the Achilles tendon tap reflex evoked during stance using a pendulum hammer.

    PubMed

    Mildren, Robyn L; Zaback, Martin; Adkin, Allan L; Frank, James S; Bent, Leah R

    2016-01-01

    The tendon tap reflex (T-reflex) is often evoked in relaxed muscles to assess spinal reflex circuitry. Factors contributing to reflex excitability are modulated to accommodate specific postural demands. Thus, there is a need to be able to assess this reflex in a state where spinal reflex circuitry is engaged in maintaining posture. The aim of this study was to determine whether a pendulum hammer could provide controlled stimuli to the Achilles tendon and evoke reliable muscle responses during normal stance. A second aim was to establish appropriate stimulus parameters for experimental use. Fifteen healthy young adults stood on a forceplate while taps were applied to the Achilles tendon under conditions in which postural sway was constrained (by providing centre of pressure feedback) or unconstrained (no feedback) from an invariant release angle (50°). Twelve participants repeated this testing approximately six months later. Within one experimental session, tap force and T-reflex amplitude were found to be reliable regardless of whether postural sway was constrained (tap force ICC=0.982; T-reflex ICC=0.979) or unconstrained (tap force ICC=0.968; T-reflex ICC=0.964). T-reflex amplitude was also reliable between experimental sessions (constrained ICC=0.894; unconstrained ICC=0.890). When a T-reflex recruitment curve was constructed, optimal mid-range responses were observed using a 50° release angle. These results demonstrate that reliable Achilles T-reflexes can be evoked in standing participants without the need to constrain posture. The pendulum hammer provides a simple method to allow researchers and clinicians to gather information about reflex circuitry in a state where it is involved in postural control.

  18. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.

    2001-01-01

    We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.

  19. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    PubMed

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

    Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits the application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable for elders' daily gait monitoring to provide valuable information for elderly health care, such as abnormal gait recognition and fall risk assessment.

  20. Supersonic civil airplane study and design: Performance and sonic boom

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    1995-01-01

    Since aircraft configuration plays an important role in aerodynamic performance and sonic boom shape, the configuration of the next generation supersonic civil transport has to be tailored to meet high aerodynamic performance and low sonic boom requirements. Computational fluid dynamics (CFD) can be used to design airplanes to meet these dual objectives. The work and results in this report are used to support NASA's High Speed Research Program (HSRP). CFD tools and techniques have been developed for general use in sonic boom propagation studies and aerodynamic design. Parallel to the research effort on sonic boom extrapolation, CFD flow solvers have been coupled with a numerical optimization tool to form a design package for aircraft configuration. This CFD optimization package has been applied to configuration design on a low-boom concept and an oblique all-wing concept. A nonlinear unconstrained optimizer for the Parallel Virtual Machine has been developed for aerodynamic design and study.

  1. SEMIPARAMETRIC ZERO-INFLATED MODELING IN MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA)

    PubMed Central

    Liu, Hai; Ma, Shuangge; Kronmal, Richard; Chan, Kung-Sik

    2013-01-01

    We analyze the Agatston score of coronary artery calcium (CAC) from the Multi-Ethnic Study of Atherosclerosis (MESA) using a semiparametric zero-inflated modeling approach, where the observed CAC scores from this cohort consist of a high frequency of zeros and continuously distributed positive values. Both partially constrained and unconstrained models are considered to investigate the underlying biological processes of CAC development from zero to positive, and from small amounts to large amounts. Unlike existing studies, a model selection procedure based on likelihood cross-validation is adopted to identify the optimal model, a choice justified by comparative Monte Carlo studies. A shrinkage version of the cubic regression spline is used for model estimation and variable selection simultaneously. Applying the proposed methods to the MESA data analysis, we show that the two biological mechanisms influencing the initiation of CAC and the magnitude of CAC when it is positive are better characterized by an unconstrained zero-inflated normal model. Our results are significantly different from those in published studies and may provide further insights into the biological mechanisms underlying CAC development in humans. This highly flexible statistical framework can be applied to zero-inflated data analyses in other areas. PMID:23805172
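
    The two-part likelihood at the heart of a zero-inflated model can be sketched parametrically. The snippet below fits an "unconstrained" version (separate parameters in the zero and positive parts) on synthetic data, leaving out the paper's spline terms and cross-validation; all variable names and numbers are illustrative.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        n = 1000
        x = rng.normal(size=n)
        p_pos = 1 / (1 + np.exp(-(-0.5 + 1.2 * x)))          # true P(score > 0)
        y = np.where(rng.uniform(size=n) < p_pos,
                     np.exp(0.3 + 0.8*x + rng.normal(scale=0.5, size=n)), 0.0)

        def negloglik(theta):
            b0, b1, g0, g1, log_s = theta
            p = 1 / (1 + np.exp(-(b0 + b1 * x)))             # zero/positive part
            s = np.exp(log_s)                                # sd > 0 by construction
            pos = y > 0
            ll = np.log1p(-p[~pos]).sum()                    # contribution of zeros
            ll += (np.log(p[pos])
                   + norm.logpdf(np.log(y[pos]), g0 + g1*x[pos], s)).sum()
            return -ll

        fit = minimize(negloglik, np.zeros(5), method='BFGS')
        print(fit.x)   # estimates of (b0, b1, g0, g1, log sigma)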

  2. A transformation method for constrained-function minimization

    NASA Technical Reports Server (NTRS)

    Park, S. K.

    1975-01-01

    A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of one finite dimensional space onto the region defined by the constraints. Functions which produce such a transformation are constructed for a variety of constraint regions including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of this method is studied in the situation where the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated for 19 test problems by achieving rapid convergence to a solution from several widely separated starting points.
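
    The idea can be illustrated with a box constraint: map all of R^n smoothly onto the feasible region and hand the composite function to an unconstrained optimizer. SciPy offers BFGS rather than the Davidon-Fletcher-Powell routine used in the paper, but the mechanics are the same; the map and objective below are illustrative choices.

        import numpy as np
        from scipy.optimize import minimize

        l = np.array([0.0, -1.0])          # lower bounds
        u = np.array([2.0,  3.0])          # upper bounds

        def T(z):
            # tanh maps R onto (-1, 1); rescale onto the open box (l, u)
            return l + 0.5 * (u - l) * (np.tanh(z) + 1.0)

        def f(x):                          # minimizer lies outside the box
            return (x[0] - 3.0)**2 + (x[1] + 2.0)**2

        res = minimize(lambda z: f(T(z)), x0=np.zeros(2), method='BFGS')
        print("constrained minimizer:", T(res.x))  # approaches the corner (2, -1)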

  3. Application of real-time cooperative editing in urban planning management system

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Liu, Renyi; Liu, Nan; Bao, Weizheng

    2007-06-01

    With the increasing business requirements of urban planning bureaus, a cooperative editing (co-edit) function is urgently needed, yet conventional GIS do not support it. To overcome this limitation, a new kind of urban planning management system with a co-edit function is needed. Such a system, called PM2006, has been used in the Suzhou Urban Planning Bureau and is introduced in this paper. Four main issues of a co-edit system are discussed: consistency, responsiveness, data recoverability, and unconstrained operation, and a resolution is put forward for each. To address these problems, a data model called FGDB (File and ESRI GeoDatabase), a mixed architecture of files and the ESRI Geodatabase, is introduced. The main components of the FGDB data model are the ESRI versioned Geodatabase and a replication architecture. With FGDB, the issues of client responsiveness, spatial data recoverability, and unconstrained operation are resolved. Finally, MapServer, the co-edit map server module, is presented; its main functions are operation serialization and spatial data replication between files and versioned data.

  4. Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review

    PubMed Central

    Miao, Yinglong; McCammon, J. Andrew

    2016-01-01

    Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631
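
    Among the methods surveyed, replica exchange is compact enough to sketch in full. The toy double-well potential and temperature ladder below are illustrative; the swap rule is the standard Metropolis criterion.

        import numpy as np

        rng = np.random.default_rng(2)
        betas = 1.0 / np.array([1.0, 1.5, 2.25, 3.4])   # inverse temperatures
        x = rng.normal(size=betas.size)                 # one coordinate per replica

        def U(x):                                       # double-well potential
            return (x**2 - 1.0)**2

        for step in range(10000):
            # Local Metropolis move within each replica at its own temperature
            prop = x + rng.normal(scale=0.3, size=x.size)
            accept = rng.uniform(size=x.size) < np.exp(-betas * (U(prop) - U(x)))
            x = np.where(accept, prop, x)
            # Attempt a swap between a random adjacent temperature pair
            i = rng.integers(betas.size - 1)
            delta = (betas[i] - betas[i+1]) * (U(x[i]) - U(x[i+1]))
            if rng.uniform() < np.exp(min(0.0, delta)):
                x[i], x[i+1] = x[i+1], x[i]

        print("final sample at the lowest temperature:", x[0])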

  5. Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review.

    PubMed

    Miao, Yinglong; McCammon, J Andrew

    Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations.

  6. Constrained Multipoint Aerodynamic Shape Optimization Using an Adjoint Formulation and Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony; Alonso, Juan Jose; Rimlinger, Mark J.; Saunders, David

    1997-01-01

    An aerodynamic shape optimization method that treats the design of complex aircraft configurations subject to high fidelity computational fluid dynamics (CFD), geometric constraints and multiple design points is described. The design process is greatly accelerated through the use of both control theory and distributed memory computer architectures. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on a higher order CFD method. In order to facilitate the integration of these high fidelity CFD approaches into future multi-disciplinary optimization (MDO) applications, new methods must be developed which are capable of simultaneously addressing complex geometries, multiple objective functions, and geometric design constraints. In our earlier studies, we coupled the adjoint based design formulations with unconstrained optimization algorithms and showed that the approach was effective for the aerodynamic design of airfoils, wings, wing-bodies, and complex aircraft configurations. In many of the results presented in these earlier works, geometric constraints were satisfied either by a projection into feasible space or by posing the design space parameterization such that it automatically satisfied constraints. Furthermore, with the exception of reference 9, where the second author initially explored the use of multipoint design in conjunction with adjoint formulations, our earlier works have focused on single point design efforts. Here we demonstrate that the same methodology may be extended to treat complete configuration designs subject to multiple design points and geometric constraints. Examples are presented for both transonic and supersonic configurations ranging from wing-alone designs to complex configuration designs involving wing, fuselage, nacelles and pylons.

  7. Optimal Battery Utilization Over Lifetime for Parallel Hybrid Electric Vehicle to Maximize Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patil, Chinmaya; Naghshtabrizi, Payam; Verma, Rajeev

    This paper presents a control strategy to maximize fuel economy of a parallel hybrid electric vehicle over a target life of the battery. Many approaches to maximizing the fuel economy of a parallel hybrid electric vehicle do not consider the effect of the control strategy on the life of the battery, which leads to an oversized and underutilized battery. There is a trade-off between how aggressively to use and 'consume' the battery versus using the engine and consuming fuel. The proposed approach addresses this trade-off by exploiting the differences between the fast dynamics of vehicle power management and the slow dynamics of battery aging. The control strategy is separated into two parts: (1) Predictive Battery Management (PBM), and (2) Predictive Power Management (PPM). PBM is the higher level control with a slow update rate, e.g., once per month, responsible for generating optimal set points for PPM. The set points considered in this paper are the battery power limits and State Of Charge (SOC). The problem of finding the optimal set points over the target battery life that minimize engine fuel consumption is solved using dynamic programming. PPM is the lower level control with a high update rate, e.g., once per second, responsible for generating the optimal HEV energy management controls, and is implemented using a model predictive control approach. The PPM objective is to find the engine and battery power commands that achieve the best fuel economy given the battery power and SOC constraints imposed by PBM. Simulation results with a medium duty commercial hybrid electric vehicle and the proposed two-level hierarchical control strategy show that the HEV fuel economy is maximized while meeting a specified target battery life. In contrast, the optimal unconstrained control strategy achieves marginally higher fuel economy, but fails to meet the target battery life.
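
    The PBM layer's dynamic program can be caricatured as budgeted resource allocation; all numbers below are hypothetical, and the real formulation tracks richer battery-aging dynamics.

        import numpy as np

        months = 36
        fuel   = np.array([100.0, 90.0, 84.0])  # monthly fuel per aggressiveness level
        aging  = np.array([1, 2, 4])            # battery-life units consumed per level
        budget = 72                             # life units available over the horizon

        # cost[b] = minimum fuel for the remaining months with b life units left
        cost = np.zeros(budget + 1)
        for m in range(months):                 # backward induction
            new = np.full(budget + 1, np.inf)
            for b in range(budget + 1):
                for lv in range(len(fuel)):
                    if aging[lv] <= b:
                        new[b] = min(new[b], fuel[lv] + cost[b - aging[lv]])
            cost = new
        print("minimum total fuel over the horizon:", cost[budget])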

  8. A point cloud modeling method based on geometric constraints mixing the robust least squares method

    NASA Astrophysics Data System (ADS)

    Yue, JIanping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan

    2016-10-01

    The emergence of 3D laser scanning technology has provided a new method for the acquisition of spatial 3D information, and it has been widely used in the field of surveying and mapping engineering owing to its automation and high precision. The processing of 3D laser scanning data mainly includes field data acquisition, in-office registration of the scans, and subsequent 3D modeling and data integration. For point cloud modeling, researchers at home and abroad have done a great deal of work. Surface reconstruction techniques mainly include point-based models, triangle meshes, triangular Bezier surface models, rectangular surface models, and so on; neural networks and alpha shapes have also been used in surface reconstruction. These methods, however, often focus on fitting single surfaces, automatically or manually, block by block, which ignores the integrity of the model. This leads to a serious problem after the surfaces are stitched together: surfaces fitted separately often fail to satisfy well-known geometric constraints such as parallelism, perpendicularity, a fixed angle, or a fixed distance. Research on modeling with such dimensional and positional constraints, however, has not been widely applied. One traditional modeling method that adds geometric constraints combines the penalty function method with the Levenberg-Marquardt algorithm (L-M algorithm); its stability is good, but it turns out to be strongly influenced by the initial values. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to improve the accuracy of the initial values, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve it using the L-M algorithm. The experimental results show that the internal accuracy is improved and that the improved point cloud modeling method proposed in this paper outperforms traditional point cloud modeling methods.
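
    The penalty-plus-Levenberg-Marquardt pipeline is easy to demonstrate on a toy case: fit two lines that should be parallel, use a robust (soft-L1) pre-fit against outliers to obtain good initial values, then enforce the constraint with a stiff penalty residual inside an L-M solve. The data and penalty weight are invented for illustration.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(3)
        x1 = rng.uniform(0, 10, 50); y1 = 2.0*x1 + 1.0 + rng.normal(0, 0.2, 50)
        x2 = rng.uniform(0, 10, 50); y2 = 2.1*x2 - 4.0 + rng.normal(0, 0.2, 50)
        y2[::10] += 3.0                                   # a few gross outliers

        def residuals(p, mu):
            m1, c1, m2, c2 = p
            r = np.concatenate([m1*x1 + c1 - y1, m2*x2 + c2 - y2])
            # Penalty turns the constrained problem (m1 == m2) into an
            # unconstrained least-squares problem
            return np.append(r, np.sqrt(mu) * (m1 - m2))

        # Robust pre-fit de-weights the outliers and improves the initial values
        p0 = least_squares(residuals, x0=[1, 0, 1, 0], args=(0.0,), loss='soft_l1').x
        # Final solve: Levenberg-Marquardt with a stiff parallelism penalty
        fit = least_squares(residuals, x0=p0, args=(1e4,), method='lm')
        print("slopes:", fit.x[0], fit.x[2])              # nearly equal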

  9. Improved belief propagation algorithm finds many Bethe states in the random-field Ising model on random graphs

    NASA Astrophysics Data System (ADS)

    Perugini, G.; Ricci-Tersenghi, F.

    2018-01-01

    We first present an empirical study of the Belief Propagation (BP) algorithm, when run on the random field Ising model defined on random regular graphs in the zero temperature limit. We introduce the notion of extremal solutions for the BP equations, and we use them to fix a fraction of spins in their ground state configuration. At the phase transition point the fraction of unconstrained spins percolates and their number diverges with the system size. This in turn makes the associated optimization problem highly nontrivial in the critical region. Using the bounds on the BP messages provided by the extremal solutions we design a new and very easy-to-implement BP scheme which is able to output a large number of stable fixed points. On the one hand this new algorithm is able to provide the minimum energy configuration with high probability in a competitive time. On the other hand we found that the number of fixed points of the BP algorithm grows with the system size in the critical region. This unexpected feature poses new relevant questions about the physics of this class of models.

  10. Challenges of ambulatory physiological sensing.

    PubMed

    Healey, Jennifer

    2004-01-01

    Applications for ambulatory monitoring span the spectrum from fitness optimization to cardiac defibrillation. This range of applications is associated with a corresponding range of required detection accuracies and a range of inconvenience and discomfort that wearers are willing to tolerate. This paper describes a selection of physiological sensors and how they might best be worn in the unconstrained ambulatory environment to provide the most robust measurements and the greatest comfort to the wearer. Using wireless mobile computing devices, it will be possible to record, analyze and respond to changes in the wearers' physiological signals in real time using these sensors.

  11. The cost of noise reduction for departure and arrival operations of commercial tilt rotor aircraft

    NASA Technical Reports Server (NTRS)

    Faulkner, H. B.; Swan, W. M.

    1976-01-01

    The relationship between direct operating cost (DOC) and noise annoyance due to a departure and an arrival operation was developed for commercial tilt rotor aircraft. This was accomplished by generating a series of tilt rotor aircraft designs to meet various noise goals at minimum DOC. These vehicles ranged across the spectrum of possible noise levels from completely unconstrained to the quietest vehicles that could be designed within the study ground rules. Optimization parameters were varied to find the minimum DOC. This basic variation was then extended to different aircraft sizes and technology time frames.

  12. σ-SCF: A direct energy-targeting method to mean-field excited states

    NASA Astrophysics Data System (ADS)

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D.; Van Voorhis, Troy

    2017-12-01

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry—a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states—ground or excited—are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.
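
    A toy linear-algebra analogue captures the targeting mechanism: over normalized states c, the variance <(H - w)^2> = c^T (H - wI)^2 c is a Rayleigh quotient, and its unconstrained minimizer is the eigenstate whose energy lies closest to the guess w. The random symmetric matrix below stands in for a mean-field Hamiltonian; this is the spirit of sigma-SCF, not its self-consistent-field implementation.

        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.normal(size=(8, 8))
        H = (A + A.T) / 2                 # random symmetric "Hamiltonian"
        w = 1.0                           # energy guess targeting an excited state

        M = (H - w*np.eye(8)) @ (H - w*np.eye(8))
        vals, vecs = np.linalg.eigh(M)    # variance functional is a Rayleigh quotient
        c = vecs[:, 0]                    # minimizer of the variance functional
        print("targeted state energy:", c @ H @ c)
        eigs = np.linalg.eigvalsh(H)
        print("closest true eigenvalue:", eigs[np.argmin(abs(eigs - w))])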

  13. σ-SCF: A direct energy-targeting method to mean-field excited states.

    PubMed

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D; Van Voorhis, Troy

    2017-12-07

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry, a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states, ground or excited, are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.

  14. Influence of Serotonin Transporter SLC6A4 Genotype on the Effect of Psychosocial Stress on Cognitive Performance: An Exploratory Pilot Study.

    PubMed

    Beversdorf, David Q; Carpenter, Allen L; Alexander, Jessica K; Jenkins, Neil T; Tilley, Michael R; White, Catherine A; Hillier, Ashleigh J; Smith, Ryan M; Gu, Howard H

    2018-06-01

    Previous research has shown an effect of various psychosocial stressors on unconstrained cognitive flexibility, such as searching through a large set of potential solutions in the lexical-semantic network during verbal problem-solving. Functional magnetic resonance imaging has shown that the presence of the short (S) allele (lacking a 43-base pair repeat) of the promoter region of the gene (SLC6A4) encoding the serotonin transporter (5-HTT) protein is associated with a greater amygdalar response to emotional stimuli and a greater response to stressors. Therefore, we hypothesized that the presence of the S-allele is associated with greater stress-associated impairment in performance on an unconstrained cognitive flexibility task, anagrams. In this exploratory pilot study, 28 healthy young adults were genotyped for long (L)-allele versus S-allele promoter region polymorphism of the 5-HTT gene, SLC6A4. Participants solved anagrams during the Trier Social Stress Test, which included public speaking and mental arithmetic stressors. We compared the participants' cognitive response to stress across genotypes. A Gene×Stress interaction effect was observed in this small sample. Comparisons revealed that participants with at least one S-allele performed worse during the Stress condition. Genetic susceptibility to stress conferred by SLC6A4 appeared to modulate unconstrained cognitive flexibility during psychosocial stress in this exploratory sample. If confirmed, this finding may have implications for conditions associated with increased stress response, including performance anxiety and cocaine withdrawal. Future work is needed both to confirm our findings with a larger sample and to explore the mechanisms of this proposed effect.

  15. Evaluation of Advanced Thermal Protection Techniques for Future Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Cowart, Kris

    2001-01-01

    A method for integrating aeroheating analysis into conceptual reusable launch vehicle (RLV) design is presented in this thesis. This process allows for a faster turn-around time to converge an RLV design by enabling the design of an optimized thermal protection system (TPS). It consists of the coupling and automation of four computer software packages: MINIVER, TPSX, TCAT and ADS. MINIVER is an aeroheating code that produces centerline radiation equilibrium temperatures, convective heating rates, and heat loads over simplified vehicle geometries. These include flat plates and swept cylinders that model wings and leading edges, respectively. TPSX is a NASA Ames material properties database that is available on the World Wide Web. The newly developed Thermal Calculation Analysis Tool (TCAT) uses finite difference methods to carry out a transient in-depth 1-D conduction analysis over the center mold line of the vehicle. This is used along with the Automated Design Synthesis (ADS) code to correctly size the vehicle's thermal protection system (TPS). The numerical optimizer ADS uses algorithms that solve constrained and unconstrained design problems. The resulting outputs of this process are TPS material types, unit thicknesses, and acreage percentages. TCAT was developed for several purposes. First, it provides a means to calculate the transient in-depth conduction seen by the surface of the TPS material that protects a vehicle during ascent and reentry. Along with the in-depth conduction, radiation from the surface of the material is calculated, together with the temperatures at the backface and in the interior of the TPS material. Second, TCAT contributes added speed and automation to the overall design process. Another motivation in the development of TCAT is optimization.
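
    A stripped-down version of the transient conduction analysis TCAT performs is sketched below: explicit finite differences through the TPS thickness, an applied heating pulse with surface reradiation, and an adiabatic backface. The material numbers are placeholders, not TPSX data.

        import numpy as np

        k, rho, cp = 0.06, 150.0, 1000.0   # W/m-K, kg/m^3, J/kg-K (hypothetical)
        eps, sigma = 0.85, 5.670e-8        # emissivity, Stefan-Boltzmann constant
        L, n = 0.05, 51                    # TPS thickness (m), number of nodes
        dx = L / (n - 1)
        alpha = k / (rho * cp)
        dt = 0.05                          # s; small enough for the radiating surface node
        q_dot = 5e4                        # applied heating rate, W/m^2

        T = np.full(n, 300.0)              # initial temperature, K
        t = 0.0
        while t < 200.0:                   # march through a 200 s heating pulse
            Tn = T.copy()
            Tn[1:-1] = T[1:-1] + alpha*dt/dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
            # Surface node: heating in, reradiation out, conduction into the material
            q_net = q_dot - eps*sigma*T[0]**4 - k*(T[0] - T[1])/dx
            Tn[0] = T[0] + dt * q_net / (rho * cp * dx / 2)
            Tn[-1] = Tn[-2]                # adiabatic backface
            T = Tn
            t += dt
        print("surface / backface temperature (K):", T[0], T[-1])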

  16. Projections onto the Pareto surface in multicriteria radiation therapy optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokrantz, Rasmus, E-mail: bokrantz@kth.se, E-mail: rasmus.bokrantz@raysearchlabs.com; Miettinen, Kaisa

    2015-10-15

    Purpose: To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. Methods: The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested where dose–volume histogram constraints are used to prevent the projection from causing a violation of some clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window and spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. Results: The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose–volume histogram constraints were used. No consistent improvements in target homogeneity were observed. Conclusions: There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.

  17. Low-dose cone-beam CT via raw counts domain low-signal correction schemes: Performance assessment and task-based parameter optimization (Part II. Task-based parameter optimization).

    PubMed

    Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong

    2018-05-01

    Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and the j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis: a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI located farther from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than imaging task II, since the major frequency component of task I was perpendicular to that of imaging task II, and because imaging task III did not have a strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and the shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work. A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potentially shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object.
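
    For reference, the NPW detectability index used above can be computed from the task function, MTF, and NPS; the discrete 1-D form below uses generic placeholder shapes rather than the measured CT data, and the paper's implementation is two-dimensional and location-dependent.

        import numpy as np

        f = np.linspace(0.01, 1.0, 200)        # spatial frequency (cycles/mm)
        df = f[1] - f[0]
        W = np.exp(-(f - 0.5)**2 / 0.01)       # task function (placeholder)
        MTF = np.exp(-2.0 * f)                 # system MTF (placeholder)
        NPS = 1e-6 * f * np.exp(-3.0 * f)      # noise power spectrum (placeholder)

        # d'^2 = [sum |W|^2 MTF^2 df]^2 / sum |W|^2 MTF^2 NPS df  (NPW observer)
        num = (np.sum(np.abs(W)**2 * MTF**2) * df) ** 2
        den = np.sum(np.abs(W)**2 * MTF**2 * NPS) * df
        print("NPW detectability d':", np.sqrt(num / den))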

  18. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
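
    The point about colored noise can be made with a small generalized least-squares example: whitening with the full covariance both changes the estimate and yields realistic formal errors. The AR(1) error model below is an illustrative stand-in for the long-period correlations described above, not the GRACE error model.

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        rng = np.random.default_rng(5)
        n, m = 200, 4
        A = rng.normal(size=(n, m))
        x_true = np.array([1.0, -2.0, 0.5, 3.0])

        phi = 0.9                                # long-period error correlation
        C = 0.1 * phi ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        y = A @ x_true + np.linalg.cholesky(C) @ rng.normal(size=n)

        # Generalized least squares: x = (A^T C^-1 A)^-1 A^T C^-1 y
        cf = cho_factor(C)
        AtCi = A.T @ cho_solve(cf, np.eye(n))
        x_gls = np.linalg.solve(AtCi @ A, AtCi @ y)
        x_ols = np.linalg.lstsq(A, y, rcond=None)[0]   # white-noise assumption
        cov_gls = np.linalg.inv(AtCi @ A)              # realistic formal errors
        print("GLS:", x_gls)
        print("OLS:", x_ols)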

  19. Rear wheel torque vectoring model predictive control with velocity regulation for electric vehicles

    NASA Astrophysics Data System (ADS)

    Siampis, Efstathios; Velenis, Efstathios; Longo, Stefano

    2015-11-01

    In this paper we propose a constrained optimal control architecture for combined velocity, yaw and sideslip regulation for stabilisation of the vehicle near the limit of lateral acceleration using the rear axle electric torque vectoring configuration of an electric vehicle. A nonlinear vehicle and tyre model are used to find reference steady-state cornering conditions and design two model predictive control (MPC) strategies of different levels of fidelity: one that uses a linearised version of the full vehicle model with the rear wheels' torques as the input, and another one that neglects the wheel dynamics and uses the rear wheels' slips as the input instead. After analysing the relative trade-offs between performance and computational effort, we compare the two MPC strategies against each other and against an unconstrained optimal control strategy in Simulink and Carsim environment.

  20. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we have considered the problem of constrained redundancy allocation of series system with interval valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by penalty function technique and solved by an advanced GA for integer variables with interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples and the results of the series redundancy allocation problem with fixed value of reliability of the components have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of our developed GA with respect to the different GA parameters.

  1. Exploratory power of the harmony search algorithm: analysis and improvements for global numerical optimization.

    PubMed

    Das, Swagatam; Mukhopadhyay, Arpan; Roy, Anwit; Abraham, Ajith; Panigrahi, Bijaya K

    2011-02-01

    The theoretical analysis of evolutionary algorithms is believed to be very important for understanding their internal search mechanism and thus to develop more efficient algorithms. This paper presents a simple mathematical analysis of the explorative search behavior of a recently developed metaheuristic algorithm called harmony search (HS). HS is a derivative-free real parameter optimization algorithm, and it draws inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper analyzes the evolution of the population-variance over successive generations in HS and thereby draws some important conclusions regarding the explorative power of HS. A simple but very useful modification to the classical HS has been proposed in light of the mathematical analysis undertaken here. A comparison with the most recently published variants of HS and four other state-of-the-art optimization algorithms over 15 unconstrained and five constrained benchmark functions reflects the efficiency of the modified HS in terms of final accuracy, convergence speed, and robustness.
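
    The classical HS loop analyzed in the paper is short enough to state in full; the sphere function and parameter values below are conventional illustrative choices.

        import numpy as np

        rng = np.random.default_rng(6)
        dim, hms = 10, 20                  # dimensionality, harmony memory size
        hmcr, par, bw = 0.9, 0.3, 0.05     # memory-consideration / pitch-adjust rates
        lo, hi = -5.0, 5.0

        def f(x):                          # unconstrained benchmark (sphere)
            return np.sum(x**2)

        hm = rng.uniform(lo, hi, size=(hms, dim))      # harmony memory
        cost = np.array([f(x) for x in hm])

        for it in range(20000):
            new = np.empty(dim)
            for j in range(dim):
                if rng.uniform() < hmcr:               # draw from memory...
                    new[j] = hm[rng.integers(hms), j]
                    if rng.uniform() < par:            # ...then maybe pitch-adjust
                        new[j] += bw * rng.uniform(-1, 1)
                else:                                  # or improvise randomly
                    new[j] = rng.uniform(lo, hi)
            worst = np.argmax(cost)
            if f(new) < cost[worst]:                   # replace the worst harmony
                hm[worst], cost[worst] = new, f(new)

        print("best value found:", cost.min())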

  2. Multibody dynamical modeling for spacecraft docking process with spring-damper buffering device: A new validation approach

    NASA Astrophysics Data System (ADS)

    Daneshjou, Kamran; Alibakhshi, Reza

    2018-01-01

    In the current manuscript, the process of spacecraft docking, as one of the main risky operations in an on-orbit servicing mission, is modeled based on unconstrained multibody dynamics. A spring-damper buffering device is utilized here in the docking probe-cone system for micro-satellites. Because impact inevitably occurs during the docking process and remarkably affects the motion characteristics of multibody systems, a continuous contact force model needs to be considered. The spring-damper buffering device, which keeps the spacecraft stable in orbit when impact occurs, connects a base (cylinder) inserted in the chaser satellite to the end of the docking probe. Furthermore, by considering a revolute joint equipped with a torsional shock absorber between the base and the chaser satellite, the docking probe can experience both translational and rotational motions simultaneously. Although the spacecraft docking process with buffering mechanisms could be modeled by constrained multibody dynamics, this paper presents a simple and efficient formulation that eliminates the surplus generalized coordinates and solves the impact docking problem based on unconstrained Lagrangian mechanics. In an example problem, model verification is first accomplished by comparing the computed results with those recently reported in the literature. Second, a new alternative validation approach, based on the constrained multibody problem, is used to evaluate the accuracy of the presented model; this verification approach can be applied to indirectly solve constrained multibody problems with minimum effort. The time history of the impact force, the influence of system flexibility, and the physical interaction between the shock absorber and the impact-induced penetration depth are the issues examined in this paper. Third, the MATLAB/SIMULINK multibody dynamic analysis software is applied to build the impact docking model, validate the computed results, and then investigate the trajectories of both satellites to determine whether a successful capture takes place.
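
    A common continuous contact force model of the kind referred to above is the Hunt-Crossley form, in which the damping term scales with penetration depth so the force vanishes smoothly at the start and end of contact. The paper's exact model and coefficients may differ; the values below are illustrative.

        def contact_force(delta, delta_dot, k=1e6, n=1.5, c_r=0.9, v0=0.3):
            """Normal contact force for penetration delta (m) and rate delta_dot (m/s).

            k   : contact stiffness (geometry/material dependent)
            n   : Hertzian exponent (1.5 for sphere-on-flat contact)
            c_r : coefficient of restitution, sets the hysteresis damping
            v0  : initial impact velocity (m/s)
            """
            if delta <= 0.0:
                return 0.0
            chi = 3.0 * k * (1.0 - c_r) / (2.0 * v0)   # Hunt-Crossley damping factor
            return k * delta**n + chi * delta**n * delta_dot

        # Evaluated inside the docking simulation's equations of motion at each step
        print(contact_force(1e-4, 0.2))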

  3. Color object detection using spatial-color joint probability functions.

    PubMed

    Luo, Jiebo; Crandall, David

    2006-06-01

    Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.

  4. Impulsive time-free transfers between halo orbits

    NASA Astrophysics Data System (ADS)

    Hiday, L. A.; Howell, K. C.

    1992-08-01

    A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L1 libration point of the sun-earth/moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.

  5. Impulsive Time-Free Transfers Between Halo Orbits

    NASA Astrophysics Data System (ADS)

    Hiday-Johnston, L. A.; Howell, K. C.

    1996-12-01

    A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L 1 libration point of the Sun-Earth/Moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.

  6. Optimal surveillance strategy for invasive species management when surveys stop after detection.

    PubMed

    Guillera-Arroita, Gurutzeta; Hauser, Cindy E; McCarthy, Michael A

    2014-05-01

    Invasive species are a cause for concern in natural and economic systems and require both monitoring and management. There is a trade-off between the amount of resources spent on surveying for the species and conducting early management of occupied sites, and the resources that are ultimately spent in delayed management at sites where the species was present but undetected. Previous work addressed this optimal resource allocation problem assuming that surveys continue despite detection until the initially planned survey effort is consumed. However, a more realistic scenario is often that surveys stop after detection (i.e., follow a "removal" sampling design) and then management begins. Such an approach will indicate a different optimal survey design and can be expected to be more efficient. We analyze this case and compare the expected efficiency of invasive species management programs under both survey methods. We also evaluate the impact of mis-specifying the type of sampling approach during the program design phase. We derive analytical expressions that optimize resource allocation between monitoring and management in surveillance programs when surveys stop after detection. We do this under a scenario of unconstrained resources and scenarios where survey budget is constrained. The efficiency of surveillance programs is greater if a "removal survey" design is used, with larger gains obtained when savings from early detection are high, occupancy is high, and survey costs are not much lower than early management costs at a site. Designing a surveillance program disregarding that surveys stop after detection can result in an efficiency loss. Our results help guide the design of future surveillance programs for invasive species. Addressing program design within a decision-theoretic framework can lead to a better use of available resources. We show how species prevalence, its detectability, and the benefits derived from early detection can be considered.

  7. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    ERIC Educational Resources Information Center

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
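
    The record is truncated, but the EM iteration it refers to is standard. Below is a minimal, generic EM sketch for a two-class latent variable model with binary categorical indicators (simulated data, all parameter values illustrative); it shows the E-step/M-step alternation, not the paper's convergence analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate binary responses from a 2-class latent variable model
    n, J = 1000, 5
    true_pi = np.array([0.4, 0.6])                 # class proportions
    true_p = np.array([[0.8] * J, [0.2] * J])      # item-endorsement probabilities
    z = rng.choice(2, size=n, p=true_pi)
    X = (rng.random((n, J)) < true_p[z]).astype(float)

    # EM iterations on the marginal log likelihood
    pi = np.array([0.5, 0.5])
    p = rng.uniform(0.3, 0.7, size=(2, J))
    for _ in range(200):
        # E-step: posterior class responsibilities for each respondent
        log_post = np.log(pi) + X @ np.log(p).T + (1 - X) @ np.log(1 - p).T
        log_post -= log_post.max(axis=1, keepdims=True)
        r = np.exp(log_post)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update class proportions and item probabilities
        pi = r.mean(axis=0)
        p = (r.T @ X) / r.sum(axis=0)[:, None]
    print(pi, p.round(2))
    ```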

  8. Constrained and Unconstrained Partial Adjacent Category Logit Models for Ordinal Response Variables

    ERIC Educational Resources Information Center

    Fullerton, Andrew S.; Xu, Jun

    2018-01-01

    Adjacent category logit models are ordered regression models that focus on comparisons of adjacent categories. These models are particularly useful for ordinal response variables with categories that are of substantive interest. In this article, we consider unconstrained and constrained versions of the partial adjacent category logit model, which…
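
    For readers unfamiliar with the model family, here is a small sketch of how adjacent-category logits map to category probabilities. The coefficient values are invented, and "constrained" here simply means one shared beta across the adjacent comparisons.

    ```python
    import numpy as np

    def adjacent_category_probs(x, alphas, betas):
        """Category probabilities for an adjacent-category logit model.

        log[P(Y=j+1)/P(Y=j)] = alphas[j] + betas[j] * x.
        An 'unconstrained' model lets betas vary by j; the constrained
        version fills betas with a single shared value.
        """
        # Cumulative sums of the adjacent logits give the log-odds of each
        # category relative to the first; softmax normalizes them.
        eta = np.concatenate(([0.0], np.cumsum(alphas + betas * x)))
        p = np.exp(eta - eta.max())
        return p / p.sum()

    alphas = np.array([0.5, -0.2, -0.8])   # four ordered categories
    betas = np.array([0.3, 0.3, 0.3])      # constrained (shared) effect
    print(adjacent_category_probs(1.2, alphas, betas))
    ```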

  9. Unconstrained Structural Equation Models of Latent Interactions: Contrasting Residual- and Mean-Centered Approaches

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Wen, Zhonglin; Hau, Kit-Tai; Little, Todd D.; Bovaird, James A.; Widaman, Keith F.

    2007-01-01

    Little, Bovaird and Widaman (2006) proposed an unconstrained approach with residual centering for estimating latent interaction effects as an alternative to the mean-centered approach proposed by Marsh, Wen, and Hau (2004, 2006). Little et al. also differed from Marsh et al. in the number of indicators used to infer the latent interaction factor…

  10. Gender recognition from unconstrained and articulated human body.

    PubMed

    Wu, Qin; Guo, Guodong

    2014-01-01

    Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts are needed to combine together, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on the partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition.

  11. Using lod scores to detect sex differences in male-female recombination fractions.

    PubMed

    Feenstra, B; Greenberg, D A; Hodge, S E

    2004-01-01

    Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect an RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (θ_female, θ_male); and "constrained," requiring θ_female = θ_male. We then examined ΔELOD (defined as the difference between the maximized constrained and unconstrained ELODs) and calculated minimum sample sizes required to achieve statistically significant ΔELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset, and the optimal proportion p̂ as that value of p that maximizes ΔELOD. We determined that, surprisingly, p̂ does not necessarily equal 1/2, although it does fall between approximately 0.4 and 0.6 in most situations. (3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) to the maximum likelihood estimates of θ_female and θ_male, even though ELOD is reduced (see point 2). This fact is important because often investigators cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel
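
    Under a simplified model of fully informative, phase-known meioses (an assumption for illustration, not the paper's full mating-type calculation), the ΔELOD logic can be sketched as:

    ```python
    import numpy as np

    def elod(theta_true, theta_hat):
        """Expected lod per fully informative, phase-known meiosis when the
        true recombination fraction is theta_true and the model uses theta_hat."""
        return (theta_true * np.log10(2 * theta_hat)
                + (1 - theta_true) * np.log10(2 * (1 - theta_hat)))

    def delta_elod(theta_f, theta_m, p):
        """ELOD gain of the sex-specific model over theta_f = theta_m,
        with p the proportion of paternally informative meioses."""
        unconstrained = p * elod(theta_m, theta_m) + (1 - p) * elod(theta_f, theta_f)
        theta_c = p * theta_m + (1 - p) * theta_f   # maximizer of the constrained ELOD
        constrained = p * elod(theta_m, theta_c) + (1 - p) * elod(theta_f, theta_c)
        return unconstrained - constrained

    # Meioses needed for the expected lod difference to reach a criterion of 3
    # (the lod criterion is an illustrative choice, not the paper's threshold)
    d = delta_elod(theta_f=0.3, theta_m=0.05, p=0.5)
    print(d, int(np.ceil(3 / d)))
    ```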

  12. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for a 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
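
    A runnable approximation of the selection loop, assuming the PyWavelets package is available: it scores candidate standard wavelets by distortion after simple top-fraction coefficient thresholding. This stands in for, and is much simpler than, the paper's continuous wavelet parameterization and embedded zerotree coding.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def distortion(signal, wavelet, keep=0.5):
        """Percent RMS difference after keeping the largest `keep` fraction
        of wavelet coefficients (a crude stand-in for EZW coding)."""
        coeffs = pywt.wavedec(signal, wavelet)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1 - keep)
        arr_kept = np.where(np.abs(arr) >= thresh, arr, 0.0)
        rec = pywt.waverec(pywt.array_to_coeffs(arr_kept, slices,
                                                output_format="wavedec"), wavelet)
        rec = rec[: len(signal)]
        return 100 * np.linalg.norm(signal - rec) / np.linalg.norm(signal)

    # Synthetic EMG-like test signal (illustrative, not real recordings)
    t = np.linspace(0, 1, 1024)
    sig = np.sin(40 * t) * np.exp(-3 * t) \
        + 0.1 * np.random.default_rng(1).standard_normal(1024)
    candidates = ["db2", "db4", "db8", "sym5", "coif3"]
    best = min(candidates, key=lambda w: distortion(sig, w))
    print(best, distortion(sig, best))
    ```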

  13. Multi-parameter geometrical scaledown study for energy optimization of MTJ and related spintronics nanodevices

    NASA Astrophysics Data System (ADS)

    Farhat, I. A. H.; Alpha, C.; Gale, E.; Atia, D. Y.; Stein, A.; Isakovic, A. F.

    The scaledown of magnetic tunnel junctions (MTJ) and related nanoscale spintronics devices poses unique challenges for energy optimization of their performance. We demonstrate the dependence of the switching current on the scaledown variable, while considering the influence of geometric parameters of the MTJ, such as the free-layer thickness, t_free, the lateral size of the MTJ, w, and the anisotropy parameter of the MTJ. At the same time, we point out which values of the saturation magnetization, Ms, and the anisotropy field, Hk, can lower the switching current and decrease the overall energy needed to operate an MTJ. It is demonstrated that scaledown via decreasing the lateral size of the MTJ, while allowing some other parameters to be unconstrained, can improve energy performance by a measurable factor, shown to be a function of both the geometric and physical parameters above. Given the complex interdependencies among both families of parameters, we developed a particle swarm optimization (PSO) algorithm that can simultaneously lower the energy of operation and the switching current density. Results obtained in the scaledown study and via PSO optimization are compared to experimental results. Support by Mubadala-SRC 2012-VJ-2335 is acknowledged, as are staff at Cornell-CNF and BNL-CFN.
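
    A generic particle swarm optimization loop of the kind the abstract describes, applied to a toy two-parameter energy surrogate; the objective, bounds, and PSO coefficients are all illustrative assumptions, not the authors' MTJ model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def energy(x):
        """Toy stand-in for MTJ switching energy as a function of the
        (normalized) lateral size w and free-layer thickness t_free."""
        w, t = x
        return (w - 0.4) ** 2 + 0.5 * (t - 0.6) ** 2 + 0.1 * np.sin(8 * w) ** 2

    n, dim, iters = 30, 2, 100
    lo, hi = 0.0, 1.0
    x = rng.uniform(lo, hi, (n, dim))            # particle positions
    v = np.zeros((n, dim))                       # particle velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(energy, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Inertia plus cognitive (pbest) and social (gbest) attraction
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(energy, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()

    print(gbest, energy(gbest))
    ```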

  14. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    PubMed

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer that achieves fusion by preserving the intensity of the infrared image and then transferring gradients in the corresponding visible one to the result. Gradient transfer suffers from low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an ℓ1-ℓ1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem to a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, than gradient transfer and most state-of-the-art methods in both qualitative and quantitative tests.
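
    The fusion objective itself is more involved, but the variable-splitting/augmented-Lagrangian mechanics can be illustrated on plain 1D TV denoising (a simplified stand-in, not the paper's ℓ1-ℓ1-TV fusion algorithm):

    ```python
    import numpy as np

    def tv_denoise_admm(b, lam=1.0, rho=2.0, iters=200):
        """Minimize 0.5*||x-b||^2 + lam*||Dx||_1 by splitting z = Dx and
        alternating updates of x, z and the scaled dual variable u."""
        n = len(b)
        D = np.diff(np.eye(n), axis=0)        # forward-difference operator
        A = np.eye(n) + rho * D.T @ D         # x-update system matrix
        z = np.zeros(n - 1)
        u = np.zeros(n - 1)
        for _ in range(iters):
            x = np.linalg.solve(A, b + rho * D.T @ (z - u))
            Dx = D @ x
            # Soft-thresholding is the proximal step for the l1 term
            z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
            u += Dx - z                       # dual ascent on the constraint z = Dx
        return x

    sig = np.repeat([0.0, 1.0, 0.3], 50) \
        + 0.1 * np.random.default_rng(2).standard_normal(150)
    print(tv_denoise_admm(sig, lam=0.5)[:5])
    ```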

  15. Multidigit force control during unconstrained grasping in response to object perturbations

    PubMed Central

    Haschke, Robert; Ritter, Helge; Santello, Marco; Ernst, Marc O.

    2017-01-01

    Because of the complex anatomy of the human hand, in the absence of external constraints, a large number of postures and force combinations can be used to attain a stable grasp. Motor synergies provide a viable strategy to solve this problem of motor redundancy. In this study, we exploited the technical advantages of an innovative sensorized object to study unconstrained hand grasping within the theoretical framework of motor synergies. Participants were required to grasp, lift, and hold the sensorized object. During the holding phase, we repetitively applied external disturbance forces and torques and recorded the spatiotemporal distribution of grip forces produced by each digit. We found that the time to reach the maximum grip force during each perturbation was roughly equal across fingers, consistent with a synchronous, synergistic stiffening across digits. We further evaluated this hypothesis by comparing the force distribution of human grasping vs. robotic grasping, where the control strategy was set by the experimenter. We controlled the global hand stiffness of the robotic hand and found that this control algorithm produced a force pattern qualitatively similar to human grasping performance. Our results suggest that the nervous system uses a default whole hand synergistic control to maintain a stable grasp regardless of the number of digits involved in the task, their position on the objects, and the type and frequency of external perturbations. NEW & NOTEWORTHY We studied hand grasping using a sensorized object allowing unconstrained finger placement. During object perturbation, the time to reach the peak force was roughly equal across fingers, consistent with a synergistic stiffening across fingers. Force distribution of a robotic grasping hand, where the control algorithm is based on global hand stiffness, was qualitatively similar to human grasping. This suggests that the central nervous system uses a default whole hand synergistic control to maintain a stable grasp. PMID:28228582

  16. Hybridisations of Variable Neighbourhood Search and Modified Simplex Elements to Harmony Search and Shuffled Frog Leaping Algorithms for Process Optimisations

    NASA Astrophysics Data System (ADS)

    Aungkulanon, P.; Luangpaiboon, P.

    2010-10-01

    Nowadays, engineering problem systems are large and complicated. Effective solution procedures for these problems can be categorised into exact optimisation and meta-heuristic algorithms. Though the best decision-variable levels cannot always be determined exactly from the sets of available alternatives, meta-heuristics offer experience-based techniques that rapidly help in problem solving, learning, and discovery, in the hope of obtaining a more efficient or more robust procedure. All meta-heuristics provide auxiliary procedures in terms of their own toolbox functions. It has been shown that the effectiveness of a meta-heuristic depends almost exclusively on these auxiliary functions. In fact, the auxiliary procedure from one can be implemented in other meta-heuristics. The well-known meta-heuristics of the harmony search algorithm (HSA) and the shuffled frog-leaping algorithm (SFLA) are compared with their hybridisations. HSA produces a near-optimal solution by modelling the state of harmony reached in the improvisation process of musicians. The SFLA, a population-based meta-heuristic, is a cooperative search metaphor inspired by natural memetics; it includes elements of local search and global information exchange. This study presents solution procedures for constrained and unconstrained problems with single- and multi-peak surfaces, including a curved ridge surface. Both meta-heuristics are modified via the variable neighbourhood search method (VNSM) philosophy, including a modified simplex method (MSM). The basic idea is the change of neighbourhoods while searching for a better solution. The hybridisations proceed by a descent method to a local minimum, then explore, systematically or at random, increasingly distant neighbourhoods of this local solution. The results show that the variant of HSA with VNSM and MSM performs better in terms of the mean and variance of the design points and yields.
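
    For concreteness, here is a bare-bones harmony search loop of the kind hybridised in the study, minimising a simple test surface; the HMCR, PAR, and bandwidth values are illustrative defaults, and the VNSM/MSM hybridisation steps are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):  # single-peak test surface (sphere); swap in a multi-peak one as needed
        return np.sum((x - 0.3) ** 2)

    dim, hms, hmcr, par, bw, iters = 2, 10, 0.9, 0.3, 0.05, 2000
    lo, hi = -1.0, 1.0
    memory = rng.uniform(lo, hi, (hms, dim))        # harmony memory
    fitness = np.array([f(h) for h in memory])

    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                 # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:              # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                                   # random improvisation
                new[j] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        worst = fitness.argmax()
        if f(new) < fitness[worst]:                 # replace worst harmony
            memory[worst], fitness[worst] = new, f(new)

    print(memory[fitness.argmin()], fitness.min())
    ```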

  17. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    NASA Technical Reports Server (NTRS)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical display of engine motions and of pressures and temperatures is included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion isothermal analysis: one for three adjustable inputs and one for four. Also, two optimization searches for calculated piston motion are presented, for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  18. Finite elements based on consistently assumed stresses and displacements

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.

    1985-01-01

    Finite element stiffness matrices are derived using an extended Hellinger-Reissner principle in which internal displacements are added to serve as Lagrange multipliers to introduce the equilibrium constraint in each element. In a consistent formulation the assumed stresses are initially unconstrained and complete polynomials and the total displacements are also complete such that the corresponding strains are complete in the same order as the stresses. Several examples indicate that resulting properties for elements constructed by this consistent formulation are ideal and are less sensitive to distortions of element geometries. The method has been used to find the optimal stress terms for plane elements, 3-D solids, axisymmetric solids, and plate bending elements.

  19. Performance enhancement of fin attached ice-on-coil type thermal storage tank for different fin orientations using constrained and unconstrained simulations

    NASA Astrophysics Data System (ADS)

    Kim, M. H.; Duong, X. Q.; Chung, J. D.

    2017-03-01

    One of the drawbacks of latent thermal energy storage systems is the slow charging and discharging due to the low thermal conductivity of the phase change material (PCM). This study numerically investigated the PCM melting process inside a finned tube to determine enhanced heat transfer performance. The influences of fin length and fin number were investigated. Also, two different fin orientations, vertical and horizontal, were examined using two different simulation methods, constrained and unconstrained. The unconstrained simulation, which considers the density difference between the solid and liquid PCM, showed an approximately 40% faster melting rate than the constrained simulation; for a precise estimation of discharging performance, unconstrained simulation is therefore essential. Thermal instability was found in the liquid layer below the solid PCM, which is contrary to linear stability theory, due to the strong convection driven by heat flux from the coil wall. As the fin length increases, the area affected by the fin becomes larger, and thus the discharging time becomes shorter. The discharging performance also increased with the fin number, but the additional enhancement from more than two fins was not discernible. The horizontal type shortened the complete melting time by approximately 10% compared to the vertical type.

  1. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L.; Wolinsky, Jean-Paul; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2015-03-01

    An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE of 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687,701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical product) in a manner consistent with natural surgical workflow.
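
    A small sketch of the gradient correlation similarity metric named above; in the registration loop, `moving` would be a DRR rendered at the current 9-DOF pose and the score passed to a derivative-free optimizer. The implementation details here are generic assumptions, not the authors' GPU code.

    ```python
    import numpy as np

    def gradient_correlation(fixed, moving):
        """Gradient correlation between two images: the mean of the
        normalized cross-correlations of their row- and column-gradients."""
        def ncc(a, b):
            a = a - a.mean()
            b = b - b.mean()
            return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        gy_f, gx_f = np.gradient(fixed)
        gy_m, gx_m = np.gradient(moving)
        return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))

    img = np.random.default_rng(3).random((64, 64))
    print(gradient_correlation(img, img))   # 1.0 for identical images
    ```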

  2. Genetically engineered peptides for inorganics: study of an unconstrained bacterial display technology and bulk aluminum alloy.

    PubMed

    Adams, Bryn L; Finch, Amethist S; Hurley, Margaret M; Sarkes, Deborah A; Stratis-Cullum, Dimitra N

    2013-09-06

    The first-ever peptide biomaterial discovery using an unconstrained engineered bacterial display technology is reported. Using this approach, we have developed genetically engineered peptide binders for a bulk aluminum alloy and use molecular dynamics simulation of peptide conformational fluctuations to demonstrate sequence-dependent, structure-function relationships for metal and metal oxide interactions. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Content-based unconstrained color logo and trademark retrieval with color edge gradient co-occurrence histograms

    NASA Astrophysics Data System (ADS)

    Phan, Raymond; Androutsos, Dimitrios

    2008-01-01

    In this paper, we present a logo and trademark retrieval system for unconstrained color image databases that extends the Color Edge Co-occurrence Histogram (CECH) object detection scheme. We introduce more accurate information to the CECH, by virtue of incorporating color edge detection using vector order statistics. This produces a more accurate representation of edges in color images, in comparison to the simple color pixel difference classification of edges as seen in the CECH. Our proposed method is thus reliant on edge gradient information, and as such, we call this the Color Edge Gradient Co-occurrence Histogram (CEGCH). We use this as the main mechanism for our unconstrained color logo and trademark retrieval scheme. Results illustrate that the proposed retrieval system retrieves logos and trademarks with good accuracy, and outperforms the CECH object detection scheme with higher precision and recall.
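
    In the spirit of the CECH/CEGCH descriptors, here is a simplified co-occurrence histogram restricted to edge pixels; the color quantization, luminance-gradient edge test, and displacement set are illustrative simplifications rather than the paper's vector-order-statistics edge detector.

    ```python
    import numpy as np

    def edge_color_cooccurrence(rgb, n_colors=8, max_d=3, edge_thresh=30.0):
        """Count pairs of quantized colors separated by up to max_d pixels,
        restricted to pixels with strong luminance gradient (edge pixels)."""
        gray = rgb.mean(axis=2)
        gy, gx = np.gradient(gray)
        edges = np.hypot(gx, gy) > edge_thresh
        q = (rgb // (256 // n_colors)).astype(int)          # quantize each channel
        labels = q[..., 0] * n_colors**2 + q[..., 1] * n_colors + q[..., 2]
        hist = np.zeros((n_colors**3, n_colors**3))
        ys, xs = np.nonzero(edges)
        h, w = gray.shape
        # Horizontal and vertical displacements up to max_d
        for dy, dx in [(0, d) for d in range(1, max_d + 1)] \
                    + [(d, 0) for d in range(1, max_d + 1)]:
            ok = (ys + dy < h) & (xs + dx < w)
            a = labels[ys[ok], xs[ok]]
            b = labels[ys[ok] + dy, xs[ok] + dx]
            np.add.at(hist, (a, b), 1)
        return hist / max(hist.sum(), 1)

    img = (np.random.default_rng(4).random((32, 32, 3)) * 255).astype(int)
    print(edge_color_cooccurrence(img).shape)
    ```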

  4. Burst Testing of Triaxial Braided Composite Tubes

    NASA Technical Reports Server (NTRS)

    Salem, J. A.; Bail, J. L.; Wilmoth, N. G.; Ghosn, L. J.; Kohlman, L. W.; Roberts, G. D.; Martin, R. E.

    2014-01-01

    Applications using triaxial braided composites are limited by the material's transverse strength, which is determined by the delamination capacity of unconstrained, free-edge tows. However, structural applications such as cylindrical tubes can be designed to minimize free-edge effects, and thus the strength with and without edge stresses is relevant to the design process. The transverse strength of triaxial braided composites without edge effects was determined by internally pressurizing tubes. In the absence of edge effects, the axial and transverse strengths were comparable. In addition, notched specimens, which minimize the effect of unconstrained tow ends, were tested in a variety of geometries. Although the commonly tested notch geometries exhibited similar axial and transverse net-section failure strength, significant dependence on notch configuration was observed. In the absence of unconstrained tows, failure ensues as a result of bias tow rotation, splitting, and fracture at cross-over regions.

  5. The unconstrained evolution of fast and efficient antibiotic-resistant bacterial genomes.

    PubMed

    Reding-Roman, Carlos; Hewlett, Mark; Duxbury, Sarah; Gori, Fabio; Gudelj, Ivana; Beardmore, Robert

    2017-01-30

    Evolutionary trajectories are constrained by trade-offs when mutations that benefit one life history trait incur fitness costs in other traits. As resistance to tetracycline antibiotics by increased efflux can be associated with an increase in length of the Escherichia coli chromosome of 10% or more, we sought costs of resistance associated with doxycycline. However, it was difficult to identify any because the growth rate (r), carrying capacity (K) and drug efflux rate of E. coli increased during evolutionary experiments where the species was exposed to doxycycline. Moreover, these improvements remained following drug withdrawal. We sought mechanisms for this seemingly unconstrained adaptation, particularly as these traits ought to trade-off according to rK selection theory. Using prokaryote and eukaryote microorganisms, including clinical pathogens, we show that r and K can trade-off, but need not, because of 'rK trade-ups'. r and K trade-off only in sufficiently carbon-rich environments where growth is inefficient. We then used E. coli ribosomal RNA (rRNA) knockouts to determine specific mutations, namely changes in rRNA operon (rrn) copy number, that can simultaneously maximize r and K. The optimal genome has fewer operons, and therefore fewer functional ribosomes, than the ancestral strain. It is, therefore, unsurprising for r-adaptation in the presence of a ribosome-inhibiting antibiotic, doxycycline, to also increase population size. We found two costs for this improvement: an elongated lag phase and the loss of stress protection genes.

  6. Sequencing of real-world samples using a microfabricated hybrid device having unconstrained straight separation channels.

    PubMed

    Liu, Shaorong; Elkin, Christopher; Kapur, Hitesh

    2003-11-01

    We describe a microfabricated hybrid device that consists of a microfabricated chip containing multiple twin-T injectors attached to an array of capillaries that serve as the separation channels. A new fabrication process was employed to create two differently sized round channels in a chip. Twin-T injectors were formed by the smaller round channels, which match the bore of the separation capillaries, and the separation capillaries were incorporated into the injectors through the larger round channels, which match the outer diameter of the capillaries. This allows for a minimum dead volume and provides a robust chip/capillary interface. This hybrid design takes full advantage of the unique chip injection scheme for DNA sequencing, such as sample stacking and purification and a uniform signal intensity profile, while employing long straight capillaries for the separations. In essence, the separation channel length is optimized for both speed and resolution since it is unconstrained by chip size. To demonstrate the reliability and practicality of this hybrid device, we sequenced over 1000 real-world samples from Human Chromosome 5 and Ciona intestinalis, prepared at the Joint Genome Institute. We achieved an average Phred20 read of 675 bases in about 70 min with a success rate of 91%. For similar samples on the MegaBACE 1000, the average Phred20 read is about 550-600 bases in 120 min separation time with a success rate of about 80-90%.

  7. A Method of Integrating Aeroheating into Conceptual Reusable Launch Vehicle Design: Evaluation of Advanced Thermal Protection Techniques for Future Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Cowart, Kris

    2001-01-01

    A method for integrating aeroheating analysis into conceptual reusable launch vehicle (RLV) design is presented in this thesis. This process allows for faster turn-around time in converging an RLV design by producing an optimized thermal protection system (TPS). It consists of the coupling and automation of four computer software packages: MINIVER, TPSX, TCAT, and ADS. MINIVER is an aeroheating code that produces centerline radiation equilibrium temperatures, convective heating rates, and heat loads over simplified vehicle geometries. These include flat plates and swept cylinders that model wings and leading edges, respectively. TPSX is a NASA Ames material properties database that is available on the World Wide Web. The newly developed Thermal Calculation Analysis Tool (TCAT) uses finite difference methods to carry out a transient in-depth 1-D conduction analysis over the center mold line of the vehicle. This is used along with the Automated Design Synthesis (ADS) code to correctly size the vehicle's TPS. The numerical optimizer ADS uses algorithms that solve constrained and unconstrained design problems. The resulting outputs for this process are TPS material types, unit thicknesses, and acreage percentages. TCAT was developed for several purposes. First, it provides a means to calculate the transient in-depth conduction seen by the surface of the TPS material that protects a vehicle during ascent and reentry. Along with the in-depth conduction, radiation from the surface of the material is calculated, along with the temperatures at the backface and interior parts of the TPS material. Second, TCAT contributes added speed and automation to the overall design process. Another motivation in the development of TCAT is optimization: in some vehicles, the TPS accounts for a high percentage of the overall vehicle dry weight, so optimizing the TPS weight lowers that percentage and, with it, the cost of the TPS and the overall cost of the vehicle.
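
    The TCAT-style transient 1-D conduction step can be sketched with an explicit finite-difference scheme: a surface node driven by a convective flux minus reradiation, and an adiabatic backface. Material values and the heating rate below are placeholders, not TPSX data or the actual TCAT implementation.

    ```python
    import numpy as np

    # Explicit finite-difference model of transient in-depth conduction in a
    # TPS layer (illustrative material properties, not TPSX values)
    k, rho, cp, eps = 0.06, 150.0, 1200.0, 0.85   # W/mK, kg/m^3, J/kgK, emissivity
    sigma = 5.67e-8                               # Stefan-Boltzmann constant
    L, nx = 0.05, 51                              # layer thickness (m), nodes
    dx = L / (nx - 1)
    alpha = k / (rho * cp)
    dt = 0.4 * dx**2 / alpha                      # below the explicit stability limit
    T = np.full(nx, 300.0)                        # initial temperature (K)

    q_conv = 5e4                                  # assumed convective heating, W/m^2
    for _ in range(int(200.0 / dt)):              # 200 s of reentry heating
        Tn = T.copy()
        # Interior nodes: standard explicit conduction update
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        # Surface node: lumped energy balance with convection and reradiation
        q_net = q_conv - eps * sigma * T[0] ** 4
        Tn[0] = T[0] + dt / (rho * cp * dx) * (q_net + k * (T[1] - T[0]) / dx)
        Tn[-1] = Tn[-2]                           # adiabatic backface
        T = Tn
    print(T[0], T[-1])                            # surface and backface temperatures
    ```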

  8. Sulcal set optimization for cortical surface registration.

    PubMed

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for intersubject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates the registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
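
    Assuming the sulcal errors are jointly Gaussian with covariance Sigma, the conditional error variance of the unconstrained curves is given by a Schur complement; a brute-force subset search over that criterion (a stand-in for the paper's exact MSE metric) looks like:

    ```python
    import numpy as np
    from itertools import combinations

    def best_constraint_subset(Sigma, n_c):
        """Choose n_c constraint curves minimizing the total conditional
        error variance of the remaining curves (trace of the Schur
        complement of the constraint block)."""
        n = Sigma.shape[0]
        def cond_var(c):
            c = list(c)
            u = [i for i in range(n) if i not in c]
            S_uu = Sigma[np.ix_(u, u)]
            S_uc = Sigma[np.ix_(u, c)]
            S_cc = Sigma[np.ix_(c, c)]
            return np.trace(S_uu - S_uc @ np.linalg.solve(S_cc, S_uc.T))
        return min(combinations(range(n), n_c), key=cond_var)

    rng = np.random.default_rng(5)
    A = rng.standard_normal((12, 12))
    Sigma = A @ A.T + 12 * np.eye(12)   # synthetic sulcal-error covariance
    print(best_constraint_subset(Sigma, n_c=3))
    ```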

  9. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.

    PubMed

    Fessler, J A; Booth, S D

    1999-01-01

    Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
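
    The preconditioned CG iteration itself is standard; here is a sketch with a diagonal preconditioner on a weighted-least-squares Hessian (the paper's shift-variant preconditioners would replace `M_inv_diag`, and the dense matrices are toy stand-ins for the imaging system model).

    ```python
    import numpy as np

    def pcg(A, b, M_inv_diag, iters=50):
        """Preconditioned conjugate gradient for Ax = b, A symmetric
        positive definite, with a diagonal preconditioner."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv_diag * r
        p = z.copy()
        for _ in range(iters):
            Ap = A @ p
            alpha = (r @ z) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            z_new = M_inv_diag * r_new
            beta = (r_new @ z_new) / (r @ z)   # Fletcher-Reeves-style update
            p = z_new + beta * p
            r, z = r_new, z_new
        return x

    rng = np.random.default_rng(6)
    G = rng.standard_normal((200, 80))          # toy system model
    w = rng.uniform(0.1, 10.0, 200)             # nonuniform noise weights
    A = G.T @ (w[:, None] * G)                  # Hessian of weighted least squares
    b = G.T @ (w * rng.standard_normal(200))
    x = pcg(A, b, 1.0 / np.diag(A))
    print(np.linalg.norm(A @ x - b))
    ```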

  10. Control pole placement relationships

    NASA Technical Reports Server (NTRS)

    Ainsworth, O. R.

    1982-01-01

    Using a simplified Large Space Structure (LSS) model, a technique was developed that gives algebraic relationships for the unconstrained poles. The relationships obtained by this technique are functions of the structural characteristics and the control gains. Extremely interesting relationships evolve for the case when the structural damping is zero: if the damping is zero, the constrained poles are uncoupled from the structural mode shapes. These relationships, derived both with and without structural damping, provide new insight into the migration of the unconstrained poles for the CFPPS.

  11. Optimal apparent damping as a function of the bandwidth of an array of vibration absorbers.

    PubMed

    Vignola, Joseph; Glean, Aldo; Judge, John; Ryan, Teresa

    2013-08-01

    The transient response of a resonant structure can be altered by the attachment of one or more substantially smaller resonators. Considered here is a coupled array of damped harmonic oscillators whose resonant frequencies are distributed across a frequency band that encompasses the natural frequency of the primary structure. Vibration energy introduced to the primary structure, which has little to no intrinsic damping, is transferred into and trapped by the attached array. It is shown that, when the properties of the array are optimized to reduce the settling time of the primary structure's transient response, the apparent damping is approximately proportional to the bandwidth of the array (the span of resonant frequencies of the attached oscillators). Numerical simulations were conducted using an unconstrained nonlinear minimization algorithm to find system parameters that result in the fastest settling time. This minimization was conducted for a range of system characteristics including the overall bandwidth of the array, the ratio of the total array mass to that of the primary structure, and the distributions of mass, stiffness, and damping among the array elements. This paper reports optimal values of these parameters and demonstrates that the resulting minimum settling time decreases with increasing bandwidth.
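
    A toy version of the optimization described: build the primary-plus-absorbers state matrix, take the reciprocal of the slowest modal decay rate as a settling-time proxy, and minimize it without constraints via Nelder-Mead. The masses, counts, and starting values are illustrative assumptions, not the paper's model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def settling_time(params, n=5, m_total=0.05):
        """Approximate settling time (1 / slowest modal decay rate) of a
        unit primary oscillator with n small attached absorbers; params
        holds the absorber natural frequencies and damping rates."""
        w = np.abs(params[:n])
        c = np.abs(params[n:])
        m = np.full(n, m_total / n)
        k = m * w**2
        M = np.diag(np.concatenate(([1.0], m)))
        K = np.zeros((n + 1, n + 1))
        C = np.zeros((n + 1, n + 1))
        K[0, 0] = 1.0 + k.sum()            # primary stiffness plus attachments
        C[0, 0] = c.sum()                  # primary has no intrinsic damping
        K[0, 1:] = K[1:, 0] = -k
        C[0, 1:] = C[1:, 0] = -c
        K[1:, 1:] = np.diag(k)
        C[1:, 1:] = np.diag(c)
        A = np.block([[np.zeros((n + 1, n + 1)), np.eye(n + 1)],
                      [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
        decay = -np.real(np.linalg.eigvals(A)).max()
        return 1.0 / max(decay, 1e-9)

    n = 5
    x0 = np.concatenate((np.linspace(0.9, 1.1, n),   # absorber band around primary
                         np.full(n, 0.002)))         # small initial damping
    res = minimize(settling_time, x0, method="Nelder-Mead",
                   options={"maxiter": 4000})
    print(res.fun, np.abs(res.x[:n]))
    ```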

  12. Application of Sequential Quadratic Programming to Minimize Smart Active Flap Rotor Hub Loads

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi; Leyland, Jane

    2014-01-01

    In an analytical study, SMART active flap rotor hub loads have been minimized using nonlinear programming constrained optimization methodology. The recently developed NLPQLP system (Schittkowski, 2010) that employs Sequential Quadratic Programming (SQP) as its core algorithm was embedded into a driver code (NLP10x10) specifically designed to minimize active flap rotor hub loads (Leyland, 2014). Three types of practical constraints on the flap deflections have been considered. To validate the current application, two other optimization methods have been used: i) the standard, linear unconstrained method, and ii) the nonlinear Generalized Reduced Gradient (GRG) method with constraints. The new software code NLP10x10 has been systematically checked out. It has been verified that NLP10x10 is functioning as desired. The following are briefly covered in this paper: relevant optimization theory; implementation of the capability of minimizing a metric of all, or a subset, of the hub loads as well as the capability of using all, or a subset, of the flap harmonics; and finally, solutions for the SMART rotor. The eventual goal is to implement NLP10x10 in a real-time wind tunnel environment.
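
    A minimal SQP-flavored sketch using SciPy's SLSQP on a hypothetical linear flap-harmonics-to-hub-loads map with a deflection-magnitude constraint; the transfer matrix, baseline loads, and constraint form are invented for illustration and are not the SMART rotor model or the NLP10x10 code.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    T = rng.standard_normal((6, 10)) * 0.1   # assumed map: flap harmonics -> hub loads
    baseline = rng.standard_normal(6)        # assumed baseline vibratory hub loads

    def hub_load_metric(x):
        """Quadratic metric of the six hub loads (sum of squares)."""
        return np.sum((baseline + T @ x) ** 2)

    # Practical constraint: keep the flap-deflection amplitude (here the
    # norm of the harmonic coefficients) within an assumed limit
    cons = [{"type": "ineq", "fun": lambda x: 2.0 - np.linalg.norm(x)}]
    res = minimize(hub_load_metric, np.zeros(10), method="SLSQP", constraints=cons)
    print(res.fun, np.linalg.norm(res.x))
    ```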

  13. Overcoming free energy barriers using unconstrained molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hénin, Jérôme; Chipot, Christophe

    2004-08-01

    Association of unconstrained molecular dynamics (MD) and the formalisms of thermodynamic integration and average force [Darve and Pohorille, J. Chem. Phys. 115, 9169 (2001)] have been employed to determine potentials of mean force. When implemented in a general MD code, the additional computational effort, compared to other standard, unconstrained simulations, is marginal. The force acting along a chosen reaction coordinate ξ is estimated from the individual forces exerted on the chemical system and accumulated as the simulation progresses. The estimated free energy derivative computed for small intervals of ξ is canceled by an adaptive bias to overcome the barriers of the free energy landscape. Evolution of the system along the reaction coordinate is, thus, limited by its sole self-diffusion properties. The illustrative examples of the reversible unfolding of deca-L-alanine, the association of acetate and guanidinium ions in water, the dimerization of methane in water, and its transfer across the water liquid-vapor interface are examined to probe the efficiency of the method.
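
    The adaptive-bias idea can be sketched in one dimension with overdamped Langevin dynamics standing in for an unconstrained MD engine: accumulate the running mean force in bins along ξ and apply its negative as a bias, which flattens the free energy barrier. This is a conceptual toy, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def force(x):                      # -dV/dx for the double well V(x) = (x^2 - 1)^2
        return -4 * x * (x**2 - 1)

    bins = np.linspace(-1.8, 1.8, 37)  # bins along the reaction coordinate xi
    f_sum = np.zeros(len(bins) - 1)
    counts = np.zeros(len(bins) - 1)
    x, dt, beta = -1.0, 1e-3, 1.0
    traj = []
    for step in range(200_000):
        i = int(np.clip(np.digitize(x, bins) - 1, 0, len(f_sum) - 1))
        f_sum[i] += force(x)           # accumulate the instantaneous force
        counts[i] += 1
        bias = -f_sum[i] / counts[i]   # adaptive bias: negative running mean force
        # Overdamped Langevin step with the biased force
        x += dt * (force(x) + bias) + np.sqrt(2 * dt / beta) * rng.standard_normal()
        traj.append(x)
    # With the barrier canceled, the particle diffuses across x = 0 instead
    # of staying trapped in the left well
    print((np.array(traj[100_000:]) > 0).mean())
    ```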

  14. Controlling Healthcare Costs: Just Cost Effectiveness or "Just" Cost Effectiveness?

    PubMed

    Fleck, Leonard M

    2018-04-01

    Meeting healthcare needs is a matter of social justice. Healthcare needs are virtually limitless; however, resources, such as money, for meeting those needs, are limited. How then should we (just and caring citizens and policymakers in such a society) decide which needs must be met as a matter of justice with those limited resources? One reasonable response would be that we should use cost effectiveness as our primary criterion for making those choices. This article argues instead that cost-effectiveness considerations must be constrained by considerations of healthcare justice. The goal of this article will be to provide a preliminary account of how we might distinguish just from unjust or insufficiently just applications of cost-effectiveness analysis to some healthcare rationing problems; specifically, problems related to extraordinarily expensive targeted cancer therapies. Unconstrained compassionate appeals for resources for the medically least well-off cancer patients will be neither just nor cost effective.

  15. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.

  16. Impact of isoprene and HONO chemistry on ozone and OVOC formation in a semirural South Korean forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Saewung; Kim, So-Young; Lee, Meehye

    Rapid urbanization and economic development in East Asia in past decades has led to photochemical air pollution problems such as excess photochemical ozone and aerosol formation. Asian megacities such as Seoul, Tokyo, Shanghai, Guangzhou, and Beijing are surrounded by densely forested areas, and recent research has consistently demonstrated the importance of biogenic volatile organic compounds from vegetation in determining oxidation capacity in the suburban Asian megacity regions. Uncertainties in constraining tropospheric oxidation capacity, dominated by hydroxyl radical concentrations, undermine our ability to assess regional photochemical air pollution problems. We present an observational dataset of CO, NOx, SO2, ozone, HONO, and VOCs (anthropogenic and biogenic) from Taehwa Research Forest (TRF) near the Seoul Metropolitan Area (SMA) in early June 2012. The data show that TRF is influenced both by aged pollution and fresh BVOC emissions. With the dataset, we diagnose HOx (OH, HO2, and RO2) distributions calculated with the University of Washington Chemical Box Model (UWCM v2.1). Uncertainty from unconstrained HONO sources and radical recycling processes highlighted in recent studies is examined using multiple model simulations with different model constraints. The results suggest that (1) different model simulation scenarios cause systematic differences in HOx distributions, especially OH levels (up to 2.5 times), and (2) radical destruction (HO2+HO2 or HO2+RO2) could be more efficient than radical recycling (HO2+NO), especially in the afternoon. Implications of the uncertainties in radical chemistry are discussed with respect to ozone-VOC-NOx sensitivity and oxidation product formation rates. Overall, a VOC-limited regime in ozone photochemistry is predicted, but the degree of sensitivity can vary significantly depending on the model scenario. The model results also suggest that RO2 levels are positively correlated with production of OVOCs that is not routinely constrained by observations. These unconstrained OVOCs can cause higher than expected OH loss rates (missing OH reactivity) and secondary organic aerosol formation. The series of modeling experiments constrained by observations strongly urge observational constraint of the radical pool to enable precise understanding of regional photochemical pollution problems in the East Asian megacity region.

  17. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization

    PubMed Central

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

    It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence—with at most a linear convergence rate—because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method. PMID:26381742
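
    As a point of comparison, here is a generic PRP+ conjugate gradient routine with Armijo backtracking and periodic restarts; it is a textbook sketch, not the paper's function-value-informed formula.

    ```python
    import numpy as np

    def prp_cg(f, grad, x0, iters=5000, restart=None):
        """Nonlinear CG with the PRP+ beta and an Armijo backtracking line
        search; restarts with steepest descent every `restart` iterations."""
        n = len(x0)
        restart = restart or n
        x, g = x0.copy(), grad(x0)
        d = -g
        for k in range(iters):
            if np.linalg.norm(g) < 1e-8:
                break
            if g @ d >= 0 or (k + 1) % restart == 0:
                d = -g                              # ensure a descent direction
            t, fx, gd = 1.0, f(x), g @ d
            while f(x + t * d) > fx + 1e-4 * t * gd:   # Armijo condition
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new)
            beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PRP+ formula
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    # Rosenbrock test problem
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                               200*(x[1] - x[0]**2)])
    print(prp_cg(f, grad, np.array([-1.2, 1.0])))
    ```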

  18. Computer aided design of monolithic microwave and millimeter wave integrated circuits and subsystems

    NASA Astrophysics Data System (ADS)

    Ku, Walter H.

    1989-05-01

    The objectives of this research are to develop analytical and computer aided design techniques for monolithic microwave and millimeter wave integrated circuits (MMIC and MIMIC) and subsystems and to design and fabricate those ICs. Emphasis was placed on heterojunction-based devices, especially the High Electron Mobility Transistor (HEMT), for both low noise and medium power microwave and millimeter wave applications. Circuits to be considered include monolithic low noise amplifiers, power amplifiers, and distributed and feedback amplifiers. Interactive computer aided design programs were developed, which include large-signal models of InP MISFETs and InGaAs HEMTs. Further, a new unconstrained optimization algorithm, POSM, was developed and implemented in the general Analysis and Design program for Integrated Circuit (ADIC) to assist in the design of large-signal nonlinear circuits.

  1. Reflected stochastic differential equation models for constrained animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
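
    The constrained path can be simulated by reflecting proposed positions back across the barrier; here is a minimal Euler-Maruyama sketch for an OU-type movement model on the unit square (the drift, diffusion, and domain are illustrative assumptions, not the paper's fitted model).

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def reflect(x, lo=0.0, hi=1.0):
        """Fold a proposed position back into [lo, hi] (reflection at the
        boundary, e.g., a shoreline constraint on movement)."""
        x = np.where(x < lo, 2 * lo - x, x)
        x = np.where(x > hi, 2 * hi - x, x)
        return x

    # Euler-Maruyama for a reflected OU-type movement model
    n_steps, dt = 5000, 0.01
    mu, sigma = np.array([0.5, 0.5]), 0.4    # home-range center, diffusion
    x = np.zeros((n_steps, 2))
    x[0] = [0.1, 0.9]
    for t in range(1, n_steps):
        drift = -0.5 * (x[t - 1] - mu)       # attraction toward the center
        prop = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        x[t] = reflect(prop)                 # constrained path stays in habitat
    print(x.min(), x.max())                  # both within [0, 1]
    ```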

  2. Learning to detect and combine the features of an object

    PubMed Central

    Suchow, Jordan W.; Pelli, Denis G.

    2013-01-01

    To recognize an object, it is widely supposed that we first detect and then combine its features. Familiar objects are recognized effortlessly, but unfamiliar objects—like new faces or foreign-language letters—are hard to distinguish and must be learned through practice. Here, we describe a method that separates detection and combination and reveals how each improves as the observer learns. We dissociate the steps by two independent manipulations: For each step, we do or do not provide a bionic crutch that performs it optimally. Thus, the two steps may be performed solely by the human, solely by the crutches, or cooperatively, when the human takes one step and a crutch takes the other. The crutches reveal a double dissociation between detecting and combining. Relative to the two-step ideal, the human observer’s overall efficiency for unconstrained identification equals the product of the efficiencies with which the human performs the steps separately. The two-step strategy is inefficient: Constraining the ideal to take two steps roughly halves its identification efficiency. In contrast, we find that humans constrained to take two steps perform just as well as when unconstrained, which suggests that they normally take two steps. Measuring threshold contrast (the faintness of a barely identifiable letter) as it improves with practice, we find that detection is inefficient and learned slowly. Combining is learned at a rate that is 4× higher and, after 1,000 trials, 7× more efficient. This difference explains much of the diversity of rates reported in perceptual learning studies, including effects of complexity and familiarity. PMID:23267067

  3. State transformations and Hamiltonian structures for optimal control in discrete systems

    NASA Astrophysics Data System (ADS)

    Sieniutycz, S.

    2006-04-01

    Preserving the usual definition of the Hamiltonian H as the scalar product of rates and generalized momenta, we investigate two basic classes of discrete optimal control processes governed by difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of the optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of the optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is if and when Hamilton's canonical structures emerge in optimal discrete systems. For constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion, which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ, two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at discrete canonical equations of Hamilton, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed, with particular attention paid to models nonlinear in the time interval θ.

  4. Body stability and muscle and motor cortex activity during walking with wide stance

    PubMed Central

    Farrell, Brad J.; Bulgakova, Margarita A.; Beloozerova, Irina N.; Sirota, Mikhail G.

    2014-01-01

    Biomechanical and neural mechanisms of balance control during walking are still poorly understood. In this study, we examined the body dynamic stability, activity of limb muscles, and activity of motor cortex neurons [primarily pyramidal tract neurons (PTNs)] in the cat during unconstrained walking and walking with a wide base of support (wide-stance walking). By recording three-dimensional full-body kinematics we found for the first time that during unconstrained walking the cat is dynamically unstable in the forward direction during stride phases when only two diagonal limbs support the body. In contrast to standing, an increased lateral between-paw distance during walking dramatically decreased the cat's body dynamic stability in double-support phases and prompted the cat to spend more time in three-legged support phases. Muscles contributing to abduction-adduction actions had higher activity during stance, while flexor muscles had higher activity during swing of wide-stance walking. The overwhelming majority of neurons in layer V of the motor cortex, 82% and 83% in the forelimb and hindlimb representation areas, respectively, were active differently during wide-stance walking compared with unconstrained condition, most often by having a different depth of stride-related frequency modulation along with a different mean discharge rate and/or preferred activity phase. Upon transition from unconstrained to wide-stance walking, proximal limb-related neuronal groups subtly but statistically significantly shifted their activity toward the swing phase, the stride phase where most of body instability occurs during this task. The data suggest that the motor cortex participates in maintenance of body dynamic stability during locomotion. PMID:24790167

  5. Comparing kinematic changes between a finger-tapping task and unconstrained finger flexion-extension task in patients with Parkinson's disease.

    PubMed

    Teo, W P; Rodrigues, J P; Mastaglia, F L; Thickbroom, G W

    2013-06-01

    Repetitive finger tapping is a well-established clinical test for the evaluation of parkinsonian bradykinesia, but few studies have investigated other finger movement modalities. We compared the kinematic changes (movement rate and amplitude) and response to levodopa during a conventional index finger-thumb-tapping task and an unconstrained index finger flexion-extension task performed at maximal voluntary rate (MVR) for 20 s in 11 individuals with levodopa-responsive Parkinson's disease (OFF and ON) and 10 healthy age-matched controls. Between-task comparisons showed that for all conditions, the initial movement rate was greater for the unconstrained flexion-extension task than the tapping task. Movement rate in the OFF state was slower than in controls for both tasks and normalized in the ON state. The movement amplitude was also reduced for both tasks in OFF and increased in the ON state but did not reach control levels. The rate and amplitude of movement declined significantly for both tasks under all conditions (OFF/ON and controls). The time course of rate decline was comparable for both tasks and was similar in OFF/ON and controls, whereas the tapping task was associated with a greater decline in movement amplitude, both in controls and ON, but not OFF. The findings indicate that both finger movement tasks show similar kinematic changes during a 20-s sustained MVR, but that movement amplitude is less well sustained during the tapping task than during the unconstrained finger movement task. Both movement rate and amplitude improved with levodopa; however, movement rate was more levodopa responsive than amplitude.

  6. Towards a ternary NIRS-BCI: single-trial classification of verbal fluency task, Stroop task and unconstrained rest

    NASA Astrophysics Data System (ADS)

    Schudlo, Larissa C.; Chau, Tom

    2015-12-01

    Objective. The majority of near-infrared spectroscopy (NIRS) brain-computer interface (BCI) studies have investigated binary classification problems. Limited work has considered differentiation of more than two mental states, or multi-class differentiation of higher-level cognitive tasks using measurements outside of the anterior prefrontal cortex. Improvements in accuracies are needed to deliver effective communication with a multi-class NIRS system. We investigated the feasibility of a ternary NIRS-BCI that supports mental states corresponding to verbal fluency task (VFT) performance, Stroop task performance, and unconstrained rest using prefrontal and parietal measurements. Approach. Prefrontal and parietal NIRS signals were acquired from 11 able-bodied adults during rest and performance of the VFT or Stroop task. Classification was performed offline using bagging with a linear discriminant base classifier trained on a 10-dimensional feature set. Main results. VFT, Stroop task and rest were classified at an average accuracy of 71.7% ± 7.9%. The ternary classification system provided a statistically significant improvement in information transfer rate relative to a binary system controlled by either mental task (0.87 ± 0.35 bits/min versus 0.73 ± 0.24 bits/min). Significance. These results suggest that effective communication can be achieved with a ternary NIRS-BCI that supports VFT, Stroop task and rest via measurements from the frontal and parietal cortices. Further development of such a system is warranted. Accurate ternary classification can enhance communication rates offered by NIRS-BCIs, improving the practicality of this technology.
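
    The classification scheme described above (bagging over a linear discriminant base learner on a 10-dimensional feature set) maps directly onto off-the-shelf tools. The sketch below is a minimal stand-in with synthetic trial features, not the authors' pipeline; the feature construction, label coding, and ensemble size are assumptions.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import BaggingClassifier
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in: one row per trial, 10 NIRS-derived features;
      # labels 0 = rest, 1 = verbal fluency task, 2 = Stroop task.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 10))
      y = rng.integers(0, 3, size=300)

      # Bagged ensemble with a linear discriminant base classifier.
      clf = BaggingClassifier(LinearDiscriminantAnalysis(), n_estimators=25,
                              random_state=0)
      print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())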

  7. The ITSG-Grace2014 Gravity Field Model

    NASA Astrophysics Data System (ADS)

    Kvas, Andreas; Mayer-Gürr, Torsten; Zehentner, Norbert; Klinger, Beate

    2015-04-01

    The ITSG-Grace2014 GRACE-only gravity field model consists of a high-resolution unconstrained static model (up to degree 200) with trend and annual signal, monthly unconstrained solutions with different spatial resolutions as well as daily snapshots derived by using a Kalman smoother. Apart from the estimated spherical harmonic coefficients, full variance-covariance matrices for the monthly solutions and the static gravity field component are provided. Compared to the previous release, multiple improvements in the processing chain are implemented: updated background models, better ionospheric modeling for GPS observations, an improved satellite attitude by combination of star camera and angular accelerations, estimation of K-band antenna center variations within the gravity field recovery process as well as error covariance function determination. Furthermore, daily gravity field variations have been modeled in the adjustment process to reduce errors caused by temporal leakage. This combined estimation of daily gravity field variations together with the static gravity field component represents a computational challenge due to the significantly increased parameter count. The modeling of daily variations up to a spherical harmonic degree of 40 for the whole GRACE observation period results in a system of linear equations with over 6 million unknown gravity field parameters. A least squares adjustment of this size is not solvable in a sensible time frame; therefore, measures to reduce the problem size have to be taken. The ITSG-Grace2014 release is presented, and selected parts of the processing chain and their effect on the estimated gravity field solutions are discussed.
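
    A standard measure for least-squares problems of this size, sketched here as an assumption rather than a description of the ITSG processing chain, is to partition the normal equations into static and daily blocks and eliminate the daily parameters before solving:

      \begin{pmatrix} N_{ss} & N_{sd} \\ N_{ds} & N_{dd} \end{pmatrix}
      \begin{pmatrix} x_s \\ x_d \end{pmatrix}
      =
      \begin{pmatrix} n_s \\ n_d \end{pmatrix}
      \quad\Longrightarrow\quad
      \left( N_{ss} - N_{sd} N_{dd}^{-1} N_{ds} \right) x_s
      = n_s - N_{sd} N_{dd}^{-1} n_d ,

    where x_s holds the static, trend, and annual coefficients and x_d the daily variations. In a Kalman-smoother formulation the daily block N_dd couples only temporally adjacent days, so its factorization is far cheaper than a direct solve of the full multi-million-parameter system.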

  8. Using prior information to separate the temperature response to greenhouse gas forcing from that of aerosols - Estimating the transient climate response

    NASA Astrophysics Data System (ADS)

    Schurer, Andrew; Hegerl, Gabriele

    2016-04-01

    The evaluation of the transient climate response (TCR) is of critical importance to policy makers as it can be used to calculate a simple estimate of the expected warming given predicted greenhouse gas emissions. Previous studies using optimal detection techniques have been able to estimate a TCR value from the historic record using simulations from some of the models which took part in the Coupled Model Intercomparison Project Phase 5 (CMIP5) but have found that others give unconstrained results. This is at least partly due to degeneracy between the greenhouse gas and aerosol signals, which makes separation of the temperature response to these forcings problematic. Here we re-visit this important topic by using an adapted optimal detection analysis within a Bayesian framework. We account for observational uncertainty by the use of an ensemble of instrumental observations, and model uncertainty by combining the results from several different models. This framework allows the use of prior information, which is found to help separate the response to the different forcings, leading to a more constrained estimate of TCR.
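
    The regression at the heart of such optimal detection analyses can be written schematically as follows; the notation is illustrative of this class of methods and the paper's exact formulation may differ:

      y = \beta_{G}\, x_{G} + \beta_{A}\, x_{A} + \beta_{N}\, x_{N} + \varepsilon ,
      \qquad
      \widehat{\mathrm{TCR}} = \beta_{G} \cdot \mathrm{TCR}_{\mathrm{model}} ,

    where y is the observed temperature record, x_G, x_A, and x_N are modeled responses to greenhouse gas, aerosol, and natural forcing, and the scaling factors β are estimated from the data. Because x_G and x_A are nearly collinear over the historical period, the likelihood alone leaves β_G (and hence TCR) poorly constrained; a Bayesian prior on the scaling factors is what breaks this degeneracy.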

  9. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.
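
    The rate-based prediction model can be sketched in the standard velocity (incremental) form; this is an illustrative rendering of the idea, not necessarily the paper's exact parametrization:

      \Delta x_{k+1} = A(p_k)\, \Delta x_k + B(p_k)\, \Delta u_k ,
      \qquad
      y_{k+1} = y_k + C\, \Delta x_{k+1} ,

    where Δx_k = x_k − x_{k−1} and Δu_k = u_k − u_{k−1} are state and input increments and p_k is the scheduling parameter of the linear parameter-varying model. Propagating increments (state derivatives) rather than absolute states across the prediction horizon is what lets the linearized model track fast transients where a fixed Jacobian model loses validity.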

  10. Slice-to-Volume Nonrigid Registration of Histological Sections to MR Images of the Human Brain

    PubMed Central

    Osechinskiy, Sergey; Kruggel, Frithjof

    2011-01-01

    Registration of histological images to three-dimensional imaging modalities is an important step in quantitative analysis of brain structure, in architectonic mapping of the brain, and in investigation of the pathology of a brain disease. Reconstruction of histology volume from serial sections is a well-established procedure, but it does not address registration of individual slices from sparse sections, which is the aim of the slice-to-volume approach. This study presents a flexible framework for intensity-based slice-to-volume nonrigid registration algorithms with a geometric transformation deformation field parametrized by various classes of spline functions: thin-plate splines (TPS), Gaussian elastic body splines (GEBS), or cubic B-splines. Algorithms are applied to cross-modality registration of histological and magnetic resonance images of the human brain. Registration performance is evaluated across a range of optimization algorithms and intensity-based cost functions. For a particular case of histological data, best results are obtained with a TPS three-dimensional (3D) warp, a new unconstrained optimization algorithm (NEWUOA), and a correlation-coefficient-based cost function. PMID:22567290

  11. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    NASA Astrophysics Data System (ADS)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
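
    One way to realize the trigonometric occupation parametrization, shown here as an illustrative form (the paper's exact convention may differ), is:

      n_p(\theta_p) = 1 - \cos\theta_p \in [0, 2] ,
      \qquad
      \sum_p n_p(\theta_p) = N_{\mathrm{elec}} ,

    so the bounds 0 ≤ n_p ≤ 2 hold automatically for any unconstrained angle θ_p, the gradients and Hessians needed for the Newton-Raphson step follow from the chain rule, and only the single electron-number condition remains, imposed to second order (or dropped for a grand-canonical ensemble).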

  12. Designing Waveform Sets with Good Correlation and Stopband Properties for MIMO Radar via the Gradient-Based Method

    PubMed Central

    Tang, Liang; Zhu, Yongfeng; Fu, Qiang

    2017-01-01

    Waveform sets with good correlation and/or stopband properties have received extensive attention and been widely used in multiple-input multiple-output (MIMO) radar. In this paper, we aim at designing unimodular waveform sets with good correlation and stopband properties. To formulate the problem, we construct two criteria to measure the correlation and stopband properties and then establish an unconstrained problem in the frequency domain. After deducing the phase gradient and the step size, an efficient gradient-based algorithm with monotonicity is proposed to minimize the objective function directly. For the design problem without considering the correlation weights, we develop a simplified algorithm, which only requires a few fast Fourier transform (FFT) operations and is more efficient. Because both of the algorithms can be implemented via the FFT operations and the Hadamard product, they are computationally efficient and can be used to design waveform sets with a large waveform number and waveform length. Numerical experiments show that the proposed algorithms can provide better performance than the state-of-the-art algorithms in terms of the computational complexity. PMID:28468308
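
    A minimal sketch of the unconstrained frequency-domain formulation for a single waveform is given below. The paper derives an analytic phase gradient and step size, whereas this sketch simply hands the FFT-evaluated objective to a generic quasi-Newton routine; the stopband location and weight are illustrative assumptions, and waveform sets would add cross-correlation terms.

      import numpy as np
      from scipy.optimize import minimize

      N = 64                      # waveform length
      stop = slice(10, 20)        # assumed stopband bins (illustrative)

      def objective(phi):
          s = np.exp(1j * phi)               # unimodular waveform from phases
          S = np.fft.fft(s, 2 * N)           # zero-padded spectrum
          acf = np.fft.ifft(np.abs(S) ** 2)  # aperiodic autocorrelation via FFT
          sidelobes = np.sum(np.abs(acf[1:N]) ** 2)            # correlation criterion
          stopband = np.sum(np.abs(np.fft.fft(s)[stop]) ** 2)  # stopband power
          return sidelobes + 10.0 * stopband  # weighted sum; weight is illustrative

      phi0 = 2 * np.pi * np.random.default_rng(0).random(N)
      res = minimize(objective, phi0, method='L-BFGS-B')
      print(objective(phi0), '->', res.fun)

    Because the design variables are the phases themselves, unimodularity is built in and the problem stays unconstrained, which is the point of the frequency-domain formulation.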

  13. Designing Waveform Sets with Good Correlation and Stopband Properties for MIMO Radar via the Gradient-Based Method.

    PubMed

    Tang, Liang; Zhu, Yongfeng; Fu, Qiang

    2017-05-01

    Waveform sets with good correlation and/or stopband properties have received extensive attention and been widely used in multiple-input multiple-output (MIMO) radar. In this paper, we aim at designing unimodular waveform sets with good correlation and stopband properties. To formulate the problem, we construct two criteria to measure the correlation and stopband properties and then establish an unconstrained problem in the frequency domain. After deducing the phase gradient and the step size, an efficient gradient-based algorithm with monotonicity is proposed to minimize the objective function directly. For the design problem without considering the correlation weights, we develop a simplified algorithm, which only requires a few fast Fourier transform (FFT) operations and is more efficient. Because both of the algorithms can be implemented via the FFT operations and the Hadamard product, they are computationally efficient and can be used to design waveform sets with a large waveform number and waveform length. Numerical experiments show that the proposed algorithms can provide better performance than the state-of-the-art algorithms in terms of the computational complexity.

  14. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and is locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
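
    The core iteration is plain Newton's method for a nonlinear system. The sketch below uses a toy two-equation residual as a stand-in for the interpolation equations, which are not reproduced here.

      import numpy as np

      def newton(F, J, x, tol=1e-12, maxit=50):
          """Solve F(x) = 0 by Newton's method with a direct linear solve."""
          for _ in range(maxit):
              dx = np.linalg.solve(J(x), -F(x))   # Newton step from the Jacobian
              x = x + dx
              if np.linalg.norm(dx) < tol:
                  break
          return x

      # Illustrative 2x2 system (not the dissertation's equations):
      F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
      J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
      print(newton(F, J, np.array([1.0, 0.0])))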

  15. Adaptive velocity-based six degree of freedom load control for real-time unconstrained biomechanical testing.

    PubMed

    Lawless, I M; Ding, B; Cazzolato, B S; Costi, J J

    2014-09-22

    Robotic biomechanics is a powerful tool for further developing our understanding of biological joints, tissues and their repair. Both velocity-based and hybrid force control methods have been applied to biomechanics, but the complex and non-linear properties of joints have limited these to slow or stepwise loading, which may not capture the real-time behaviour of joints. This paper presents a novel force control scheme combining stiffness- and velocity-based methods aimed at achieving six degree of freedom unconstrained force control at physiological loading rates. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Resolving ice cloud optical thickness biases between CALIOP and MODIS using infrared retrievals

    NASA Astrophysics Data System (ADS)

    Holz, R. E.; Platnick, S.; Meyer, K.; Vaughan, M.; Heidinger, A.; Yang, P.; Wind, G.; Dutcher, S.; Ackerman, S.; Amarasinghe, N.; Nagle, F.; Wang, C.

    2015-10-01

    Despite its importance as one of the key radiative properties that determines the impact of upper tropospheric clouds on the radiation balance, ice cloud optical thickness (IOT) has proven to be one of the more challenging properties to retrieve from space-based remote sensing measurements. In particular, optically thin upper tropospheric ice clouds (cirrus) have been especially challenging due to their tenuous nature, extensive spatial scales, and complex particle shapes and light scattering characteristics. The lack of independent validation motivates the investigation presented in this paper, wherein systematic biases between MODIS Collection 5 (C5) and CALIOP Version 3 (V3) unconstrained retrievals of tenuous IOT (< 3) are examined using a month of collocated A-Train observations. An initial comparison revealed a factor of two bias between the MODIS and CALIOP IOT retrievals. This bias is investigated using an infrared (IR) radiative closure approach that compares both products with MODIS IR cirrus retrievals developed for this assessment. The analysis finds that both the MODIS C5 and the unconstrained CALIOP V3 retrievals are biased (high and low, respectively) relative to the IR IOT retrievals. Based on this finding, the MODIS and CALIOP algorithms are investigated with the goal of explaining and minimizing the biases relative to the IR. For MODIS we find that the assumed ice single scattering properties used for the C5 retrievals are not consistent with the mean IR COT distribution. The C5 ice scattering database results in the asymmetry parameter (g) varying as a function of effective radius with mean values that are too large. The MODIS retrievals have been brought into agreement with the IR by adopting a new ice scattering model for Collection 6 (C6) consisting of a modified gamma distribution comprised of a single habit (severely roughened aggregated columns); the C6 ice cloud optical property models have a constant g ~ 0.75 in the mid-visible spectrum, 5-15 % smaller than C5. For CALIOP, the assumed lidar ratio for unconstrained retrievals is fixed at 25 sr for the V3 data products. This value is found to be inconsistent with the constrained (predominantly nighttime) CALIOP retrievals. An experimental data set was produced using a modified lidar ratio of 32 sr for the unconstrained retrievals (an increase of 28 %), selected to provide consistency with the constrained V3 results. These modifications greatly improve the agreement with the IR and provide consistency between the MODIS and CALIOP products. Based on these results the recently released MODIS C6 optical products use the single habit distribution given above, while the upcoming CALIOP V4 unconstrained algorithm will use higher lidar ratios for unconstrained retrievals.

  17. Resolving Ice Cloud Optical Thickness Biases Between CALIOP and MODIS Using Infrared Retrievals

    NASA Technical Reports Server (NTRS)

    Holz, R. E.; Platnick, S.; Meyer, K.; Vaughan, M.; Heidinger, A.; Yang, P.; Wind, G.; Dutcher, S.; Ackerman, S.; Amarasinghe, N.

    2015-01-01

    Despite its importance as one of the key radiative properties that determines the impact of upper tropospheric clouds on the radiation balance, ice cloud optical thickness (IOT) has proven to be one of the more challenging properties to retrieve from space-based remote sensing measurements. In particular, optically thin upper tropospheric ice clouds (cirrus) have been especially challenging due to their tenuous nature, extensive spatial scales, and complex particle shapes and light scattering characteristics. The lack of independent validation motivates the investigation presented in this paper, wherein systematic biases between MODIS Collection 5 (C5) and CALIOP Version 3 (V3) unconstrained retrievals of tenuous IOT (< 3) are examined using a month of collocated A-Train observations. An initial comparison revealed a factor of two bias between the MODIS and CALIOP IOT retrievals. This bias is investigated using an infrared (IR) radiative closure approach that compares both products with MODIS IR cirrus retrievals developed for this assessment. The analysis finds that both the MODIS C5 and the unconstrained CALIOP V3 retrievals are biased (high and low, respectively) relative to the IR IOT retrievals. Based on this finding, the MODIS and CALIOP algorithms are investigated with the goal of explaining and minimizing the biases relative to the IR. For MODIS we find that the assumed ice single scattering properties used for the C5 retrievals are not consistent with the mean IR COT distribution. The C5 ice scattering database results in the asymmetry parameter (g) varying as a function of effective radius with mean values that are too large. The MODIS retrievals have been brought into agreement with the IR by adopting a new ice scattering model for Collection 6 (C6) consisting of a modified gamma distribution comprised of a single habit (severely roughened aggregated columns); the C6 ice cloud optical property models have a constant g ≈ 0.75 in the mid-visible spectrum, 5-15% smaller than C5. For CALIOP, the assumed lidar ratio for unconstrained retrievals is fixed at 25 sr for the V3 data products. This value is found to be inconsistent with the constrained (predominantly nighttime) CALIOP retrievals. An experimental data set was produced using a modified lidar ratio of 32 sr for the unconstrained retrievals (an increase of 28%), selected to provide consistency with the constrained V3 results. These modifications greatly improve the agreement with the IR and provide consistency between the MODIS and CALIOP products. Based on these results the recently released MODIS C6 optical products use the single habit distribution given above, while the upcoming CALIOP V4 unconstrained algorithm will use higher lidar ratios for unconstrained retrievals.

  18. Resolving ice cloud optical thickness biases between CALIOP and MODIS using infrared retrievals

    NASA Astrophysics Data System (ADS)

    Holz, Robert E.; Platnick, Steven; Meyer, Kerry; Vaughan, Mark; Heidinger, Andrew; Yang, Ping; Wind, Gala; Dutcher, Steven; Ackerman, Steven; Amarasinghe, Nandana; Nagle, Fredrick; Wang, Chenxi

    2016-04-01

    Despite its importance as one of the key radiative properties that determines the impact of upper tropospheric clouds on the radiation balance, ice cloud optical thickness (IOT) has proven to be one of the more challenging properties to retrieve from space-based remote sensing measurements. In particular, optically thin upper tropospheric ice clouds (cirrus) have been especially challenging due to their tenuous nature, extensive spatial scales, and complex particle shapes and light-scattering characteristics. The lack of independent validation motivates the investigation presented in this paper, wherein systematic biases between MODIS Collection 5 (C5) and CALIOP Version 3 (V3) unconstrained retrievals of tenuous IOT (< 3) are examined using a month of collocated A-Train observations. An initial comparison revealed a factor of 2 bias between the MODIS and CALIOP IOT retrievals. This bias is investigated using an infrared (IR) radiative closure approach that compares both products with MODIS IR cirrus retrievals developed for this assessment. The analysis finds that both the MODIS C5 and the unconstrained CALIOP V3 retrievals are biased (high and low, respectively) relative to the IR IOT retrievals. Based on this finding, the MODIS and CALIOP algorithms are investigated with the goal of explaining and minimizing the biases relative to the IR. For MODIS we find that the assumed ice single-scattering properties used for the C5 retrievals are not consistent with the mean IR COT distribution. The C5 ice scattering database results in the asymmetry parameter (g) varying as a function of effective radius with mean values that are too large. The MODIS retrievals have been brought into agreement with the IR by adopting a new ice scattering model for Collection 6 (C6) consisting of a modified gamma distribution comprised of a single habit (severely roughened aggregated columns); the C6 ice cloud optical property models have a constant g ≈ 0.75 in the mid-visible spectrum, 5-15 % smaller than C5. For CALIOP, the assumed lidar ratio for unconstrained retrievals is fixed at 25 sr for the V3 data products. This value is found to be inconsistent with the constrained (predominantly nighttime) CALIOP retrievals. An experimental data set was produced using a modified lidar ratio of 32 sr for the unconstrained retrievals (an increase of 28 %), selected to provide consistency with the constrained V3 results. These modifications greatly improve the agreement with the IR and provide consistency between the MODIS and CALIOP products. Based on these results the recently released MODIS C6 optical products use the single-habit distribution given above, while the upcoming CALIOP V4 unconstrained algorithm will use higher lidar ratios for unconstrained retrievals.

  19. Inverse-optimized 3D conformal planning: Minimizing complexity while achieving equivalence with beamlet IMRT in multiple clinical sites

    PubMed Central

    Fraass, Benedick A.; Steers, Jennifer M.; Matuszak, Martha M.; McShan, Daniel L.

    2012-01-01

    Purpose: Inverse planned intensity modulated radiation therapy (IMRT) has helped many centers implement highly conformal treatment planning with beamlet-based techniques. The many comparisons between IMRT and 3D conformal (3DCRT) plans, however, have been limited because most 3DCRT plans are forward-planned while IMRT plans utilize inverse planning, meaning both optimization and delivery techniques are different. This work avoids that problem by comparing 3D plans generated with a unique inverse planning method for 3DCRT called inverse-optimized 3D (IO-3D) conformal planning. Since IO-3D and the beamlet IMRT to which it is compared use the same optimization techniques, cost functions, and plan evaluation tools, direct comparisons between IMRT and simple, optimized IO-3D plans are possible. Though IO-3D has some similarity to direct aperture optimization (DAO), since it directly optimizes the apertures used, IO-3D is specifically designed for 3DCRT fields (i.e., 1–2 apertures per beam) rather than starting with IMRT-like modulation and then optimizing aperture shapes. The two algorithms are very different in design, implementation, and use. The goals of this work include using IO-3D to evaluate how close simple but optimized IO-3D plans come to nonconstrained beamlet IMRT, showing that optimization, rather than modulation, may be the most important aspect of IMRT (for some sites). Methods: The IO-3D dose calculation and optimization functionality is integrated in the in-house 3D planning/optimization system. New features include random point dose calculation distributions, costlet and cost function capabilities, fast dose volume histogram (DVH) and plan evaluation tools, optimization search strategies designed for IO-3D, and an improved, reimplemented edge/octree calculation algorithm. The IO-3D optimization, in distinction to DAO, is designed to optimize 3D conformal plans (one to two segments per beam) and optimizes MLC segment shapes and weights with various user-controllable search strategies which optimize plans without beamlet or pencil beam approximations. IO-3D allows comparisons of beamlet, multisegment, and conformal plans optimized using the same cost functions, dose points, and plan evaluation metrics, so quantitative comparisons are straightforward. Here, comparisons of IO-3D and beamlet IMRT techniques are presented for breast, brain, liver, and lung plans. Results: IO-3D achieves high quality results comparable to beamlet IMRT, for many situations. Though the IO-3D plans have many fewer degrees of freedom for the optimization, this work finds that IO-3D plans with only one to two segments per beam are dosimetrically equivalent (or nearly so) to the beamlet IMRT plans, for several sites. IO-3D also reduces plan complexity significantly. Here, monitor units per fraction (MU/Fx) for IO-3D plans were 22%–68% less than those for the 1 cm × 1 cm beamlet IMRT plans and 72%–84% less than those for the 0.5 cm × 0.5 cm beamlet IMRT plans. Conclusions: The unique IO-3D algorithm illustrates that inverse planning can achieve high quality 3D conformal plans equivalent (or nearly so) to unconstrained beamlet IMRT plans, for many sites. IO-3D thus provides the potential to optimize flat or few-segment 3DCRT plans, creating less complex optimized plans which are efficient and simple to deliver. The less complex IO-3D plans have operational advantages for scenarios including adaptive replanning, cases with interfraction and intrafraction motion, and pediatric patients. PMID:22755717

  20. Practical global oceanic state estimation

    NASA Astrophysics Data System (ADS)

    Wunsch, Carl; Heimbach, Patrick

    2007-06-01

    The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data, satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS-1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.
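
    Schematically, the Lagrange multiplier device turns the model-constrained fit into a stationarity problem for an unconstrained functional; the notation below is illustrative of this class of estimators rather than ECCO's exact cost function:

      \mathcal{J} = \sum_{t} \left( y_t - E_t x_t \right)^{\mathsf T} R_t^{-1}
                    \left( y_t - E_t x_t \right)
                  + u^{\mathsf T} Q^{-1} u
                  + \sum_{t} \mu_t^{\mathsf T}
                    \left[ x_{t+1} - \mathcal{M}(x_t, u) \right] ,

    where x_t is the ocean state, u the adjustable controls, y_t the observations with error covariance R_t, M the GCM time step, and μ_t the multipliers (adjoint variables). Setting the derivatives of J with respect to x_t to zero yields the adjoint equations, which is the code produced by the automatic differentiation tool.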

  1. Free Swimming in Ground Effect

    NASA Astrophysics Data System (ADS)

    Cochran-Carney, Jackson; Wagenhoffer, Nathan; Zeyghami, Samane; Moored, Keith

    2017-11-01

    A free-swimming potential flow analysis of unsteady ground effect is conducted for two-dimensional airfoils via a method of images. The foils undergo a pure pitching motion about their leading edge, and the positions of the body in the streamwise and cross-stream directions are determined by the equations of motion of the body. It is shown that the unconstrained swimmer is attracted to a time-averaged position that is mediated by the flow interaction with the ground. The robustness of this fluid-mediated equilibrium position is probed by varying the non-dimensional mass, initial conditions and kinematic parameters of motion. Comparisons to the foil's fixed-motion counterpart are also made to pinpoint the effect that free swimming near the ground has on wake structures and the fluid-mediated forces over time. Optimal swimming regimes for near-boundary swimming are determined by examining asymmetric motions.

  2. An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel and fully unsupervised approach for foreground object co-localization and segmentation of unconstrained videos. We first compute both the actual edges and motion boundaries of the video frames, and then align them by their HOG feature maps. Then, by filling the occlusions generated by the aligned edges, we obtain more precise masks of the foreground object. These motion-based masks serve as the motion-based likelihood. Moreover, the color-based likelihood is adopted for the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.

  3. Formation and decay of tetrazane derivatives--a Car-Parrinello molecular dynamics study.

    PubMed

    Nonnenberg, Christel; Frank, Irmgard

    2008-08-14

    The complications during flight 510 of the Ariane Project were ascribed to problems in the upper stage engine that employs the bipropellant monomethylhydrazine (MMH) and nitrogen tetroxide (NTO). This has led to the question of what conditions or reactions might cause uncontrolled behaviour in the combustion process of MMH/NTO. We use first-principles molecular dynamics to investigate the reactions of the hypergolic mixture in different chemical situations. It was possible to observe the ultrafast redox reaction between the reactants on the timescale of an unconstrained simulation. We show that electrostatic attraction is crucial for the understanding of this reaction. Besides a cold reaction preceding the ignition, a reaction path leading to the highly reactive compound dimethyltetrazane could be identified.

  4. The helpfulness of category labels in semi-supervised learning depends on category structure.

    PubMed

    Vong, Wai Keen; Navarro, Daniel J; Perfors, Amy

    2016-02-01

    The study of semi-supervised category learning has generally focused on how additional unlabeled information, combined with given labeled information, might benefit category learning. The literature is also somewhat contradictory, sometimes appearing to show a benefit to unlabeled information and sometimes not. In this paper, we frame the problem differently, focusing on when labels might be helpful to a learner who has access to lots of unlabeled information. Using an unconstrained free-sorting categorization experiment, we show that labels are useful to participants only when the category structure is ambiguous and that people's responses are driven by the specific set of labels they see. We present an extension of Anderson's Rational Model of Categorization that captures this effect.

  5. Improving approximate-optimized effective potentials by imposing exact conditions: Theory and applications to electronic statics and dynamics

    NASA Astrophysics Data System (ADS)

    Kurzweil, Yair; Head-Gordon, Martin

    2009-07-01

    We develop a method that can constrain any local exchange-correlation potential to preserve basic exact conditions. Using the method of Lagrange multipliers, we calculate for each set of given Kohn-Sham orbitals a constraint-preserving potential which is closest to the given exchange-correlation potential. The method is applicable to both the time-dependent (TD) and independent cases. The exact conditions that are enforced for the time-independent case are Galilean covariance, zero net force and torque, and Levy-Perdew virial theorem. For the time-dependent case we enforce translational covariance, zero net force, Levy-Perdew virial theorem, and energy balance. We test our method on the exchange (only) Krieger-Li-Iafrate (xKLI) approximate-optimized effective potential for both cases. For the time-independent case, we calculated the ground state properties of some hydrogen chains and small sodium clusters for some constrained xKLI potentials and Hartree-Fock (HF) exchange. The results (total energy, Kohn-Sham eigenvalues, polarizability, and hyperpolarizability) indicate that enforcing the exact conditions is not important for these cases. On the other hand, in the time-dependent case, constraining both energy balance and zero net force yields improved results relative to TDHF calculations. We explored the electron dynamics in small sodium clusters driven by cw laser pulses. For each laser pulse we compared calculations from TD constrained xKLI, TD partially constrained xKLI, and TDHF. We found that electron dynamics such as electron ionization and moment of inertia dynamics for the constrained xKLI are most similar to the TDHF results. Also, energy conservation is better by at least one order of magnitude with respect to the unconstrained xKLI. We also discuss the problems that arise in satisfying constraints in the TD case with a non-cw driving force.
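
    For constraints that are linear in the potential, the "closest constraint-preserving potential" has a closed form; the inner-product notation below is a schematic of the construction, not the paper's working equations:

      \min_{v} \tfrac{1}{2} \lVert v - v_{\mathrm{xc}} \rVert^{2}
      \;\; \text{s.t.} \;\; \langle g_i, v \rangle = c_i
      \quad\Longrightarrow\quad
      v = v_{\mathrm{xc}} + \sum_i \lambda_i\, g_i ,
      \qquad
      \sum_j \langle g_i, g_j \rangle\, \lambda_j
      = c_i - \langle g_i, v_{\mathrm{xc}} \rangle ,

    so the multipliers λ_i solve a small Gram system with one row per enforced condition (zero net force, zero torque, virial relation, energy balance), and the projection is reapplied for each set of orbitals in the statics or at each time step of the dynamics.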

  6. Improving approximate-optimized effective potentials by imposing exact conditions: Theory and applications to electronic statics and dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzweil, Yair; Head-Gordon, Martin

    2009-07-15

    We develop a method that can constrain any local exchange-correlation potential to preserve basic exact conditions. Using the method of Lagrange multipliers, we calculate for each set of given Kohn-Sham orbitals a constraint-preserving potential which is closest to the given exchange-correlation potential. The method is applicable to both the time-dependent (TD) and independent cases. The exact conditions that are enforced for the time-independent case are Galilean covariance, zero net force and torque, and Levy-Perdew virial theorem. For the time-dependent case we enforce translational covariance, zero net force, Levy-Perdew virial theorem, and energy balance. We test our method on the exchange (only) Krieger-Li-Iafrate (xKLI) approximate-optimized effective potential for both cases. For the time-independent case, we calculated the ground state properties of some hydrogen chains and small sodium clusters for some constrained xKLI potentials and Hartree-Fock (HF) exchange. The results (total energy, Kohn-Sham eigenvalues, polarizability, and hyperpolarizability) indicate that enforcing the exact conditions is not important for these cases. On the other hand, in the time-dependent case, constraining both energy balance and zero net force yields improved results relative to TDHF calculations. We explored the electron dynamics in small sodium clusters driven by cw laser pulses. For each laser pulse we compared calculations from TD constrained xKLI, TD partially constrained xKLI, and TDHF. We found that electron dynamics such as electron ionization and moment of inertia dynamics for the constrained xKLI are most similar to the TDHF results. Also, energy conservation is better by at least one order of magnitude with respect to the unconstrained xKLI. We also discuss the problems that arise in satisfying constraints in the TD case with a non-cw driving force.

  7. Structural brain correlates of unconstrained motor activity in people with schizophrenia.

    PubMed

    Farrow, Tom F D; Hunter, Michael D; Wilkinson, Iain D; Green, Russell D J; Spence, Sean A

    2005-11-01

    Avolition affects quality of life in chronic schizophrenia. We investigated the relationship between unconstrained motor activity and the volume of key executive brain regions in 16 male patients with schizophrenia. Wrist-worn actigraphy monitors were used to record motor activity over a 20 h period. Structural magnetic resonance imaging brain scans were parcellated and individual volumes for the anterior cingulate cortex and dorsolateral prefrontal cortex extracted. Patients' total activity was positively correlated with the volume of the left anterior cingulate cortex. These data suggest that the volume of specific executive structures may affect (quantifiable) motor behaviours, having further implications for models of the 'will' and avolition.

  8. Unconstrained paving and plastering method for generating finite element meshes

    DOEpatents

    Staten, Matthew L.; Owen, Steven J.; Blacker, Teddy D.; Kerr, Robert

    2010-03-02

    Computer software for and a method of generating a conformal all-quadrilateral or hexahedral mesh comprising selecting an object with unmeshed boundaries and performing the following while unmeshed voids are larger than twice a desired element size and unrecognizable as either midpoint-subdividable or pave-and-sweepable polyhedra: selecting a front to advance; based on sizes of fronts and angles with adjacent fronts, determining which adjacent fronts should be advanced with the selected front; advancing the fronts; detecting proximities with other nearby fronts; resolving any found proximities; forming quadrilaterals or unconstrained columns of hexahedra where two layers cross; and establishing hexahedral elements where three layers cross.

  9. Learning optimal eye movements to unusual faces

    PubMed Central

    Peterson, Matthew F.; Eckstein, Miguel P.

    2014-01-01

    Eye movements, which guide the fovea’s high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer’s default face identification eye movement behavior to the new optimal fixation point and the observer’s peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. PMID:24291712

  10. Intervention in gene regulatory networks with maximal phenotype alteration.

    PubMed

    Yousefi, Mohammadmahdi R; Dougherty, Edward R

    2013-07-15

    A basic issue for translational genomics is to model gene interaction via gene regulatory networks (GRNs) and thereby provide an informatics environment to study the effects of intervention (say, via drugs) and to derive effective intervention strategies. Taking the view that the phenotype is characterized by the long-run behavior (steady-state distribution) of the network, we desire interventions to optimally move the probability mass from undesirable to desirable states. Heretofore, two external control approaches have been taken to shift the steady-state mass of a GRN: (i) use a user-defined cost function for which desirable shift of the steady-state mass is a by-product and (ii) use heuristics to design a greedy algorithm. Neither approach provides an optimal control policy relative to long-run behavior. We use a linear programming approach to optimally shift the steady-state mass from undesirable to desirable states, i.e. optimization is directly based on the amount of shift and therefore must outperform previously proposed methods. Moreover, the same basic linear programming structure is used for both unconstrained and constrained optimization, where in the latter case, constraints on the optimization limit the amount of mass that may be shifted to 'ambiguous' states, these being states that are not directly undesirable relative to the pathology of interest but which bear some perceived risk. We apply the method to probabilistic Boolean networks, but the theory applies to any Markovian GRN. Supplementary materials, including the simulation results, MATLAB source code and description of suboptimal methods, are available at http://gsp.tamu.edu/Publications/supplementary/yousefi13b. Contact: edward@ece.tamu.edu. Supplementary data are available at Bioinformatics online.
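
    The occupation-measure linear program that underlies this kind of steady-state shifting can be sketched with scipy.optimize.linprog. The states, interventions, and transition tensor below are synthetic stand-ins, not a real GRN; the constrained variant of the paper would add inequality rows limiting the mass assigned to ambiguous states.

      import numpy as np
      from scipy.optimize import linprog

      nS, nA = 8, 2                      # synthetic network: 8 states, 2 interventions
      rng = np.random.default_rng(1)
      P = rng.random((nA, nS, nS))
      P /= P.sum(axis=2, keepdims=True)  # P[a, s, s'] = transition probabilities
      desirable = np.zeros(nS)
      desirable[:4] = 1.0                # mass we want to shift into states 0-3

      # Variables x[s, a] >= 0: long-run state-action frequencies.
      # Maximize desirable mass  =>  minimize -sum over desirable s of x[s, a].
      c = -np.repeat(desirable, nA)

      # Flow balance: sum_a x[s', a] = sum_{s, a} x[s, a] P[a, s, s'] for all s'.
      A_eq = np.zeros((nS + 1, nS * nA))
      for sp in range(nS):
          for s in range(nS):
              for a in range(nA):
                  A_eq[sp, s * nA + a] += P[a, s, sp]
          for a in range(nA):
              A_eq[sp, sp * nA + a] -= 1.0
      A_eq[nS, :] = 1.0                  # normalization: total mass is 1
      b_eq = np.zeros(nS + 1)
      b_eq[nS] = 1.0

      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method='highs')
      print('max desirable steady-state mass:', -res.fun)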

  11. Allocating dissipation across a molecular machine cycle to maximize flux

    PubMed Central

    Brown, Aidan I.; Sivak, David A.

    2017-01-01

    Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that—in contrast to previous findings—the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both “irreversible” and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation. PMID:29073016

  12. A Machine Learns to Predict the Stability of Tightly Packed Planetary Systems

    NASA Astrophysics Data System (ADS)

    Tamayo, Daniel; Silburt, Ari; Valencia, Diana; Menou, Kristen; Ali-Dib, Mohamad; Petrovich, Cristobal; Huang, Chelsea X.; Rein, Hanno; van Laerhoven, Christa; Paradise, Adiv; Obertas, Alysa; Murray, Norman

    2016-12-01

    The requirement that planetary systems be dynamically stable is often used to vet new discoveries or set limits on unconstrained masses or orbital elements. This is typically carried out via computationally expensive N-body simulations. We show that characterizing the complicated and multi-dimensional stability boundary of tightly packed systems is amenable to machine-learning methods. We find that training an XGBoost machine-learning algorithm on physically motivated features yields an accurate classifier of stability in packed systems. On the stability timescale investigated (10^7 orbits), it is three orders of magnitude faster than direct N-body simulations. Optimized machine-learning classifiers for dynamical stability may thus prove useful across the discipline, e.g., to characterize the exoplanet sample discovered by the upcoming Transiting Exoplanet Survey Satellite. This proof of concept motivates investing computational resources to train algorithms capable of predicting stability over longer timescales and over broader regions of phase space.
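
    The training loop itself is conceptually simple; here is a hedged sketch with synthetic data. The paper's features are physically motivated quantities (for example, orbital spacings and short-integration variability), which the random matrix below does not reproduce, and the hyperparameters are illustrative.

      import numpy as np
      from xgboost import XGBClassifier
      from sklearn.model_selection import train_test_split

      # Synthetic stand-in: one row per simulated planetary system,
      # columns are summary features; label 1 = survived the integration.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(5000, 20))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
      clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
      clf.fit(Xtr, ytr)                       # gradient-boosted tree ensemble
      print('holdout accuracy:', clf.score(Xte, yte))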

  13. Modeling and quantification of repolarization feature dependency on heart rate.

    PubMed

    Minchole, A; Zacur, E; Pueyo, E; Laguna, P

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent optimization methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related and with a shorter memory lag than the QT interval.
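
    One common way to fold box restrictions into the cost so that plain descent applies is to reparametrize each bounded parameter through a smooth squashing function. The sketch below illustrates the device on a placeholder exponential model; the bounds, model, and data are assumptions, not the paper's repolarization model.

      import numpy as np
      from scipy.optimize import minimize

      lo, hi = np.array([0.0, 0.1]), np.array([5.0, 2.0])   # assumed bounds

      def squash(z):
          """Map unconstrained z to the box [lo, hi] via a sigmoid."""
          return lo + (hi - lo) / (1.0 + np.exp(-z))

      # Placeholder data; a real cost would compare predicted and measured
      # repolarization intervals after rate-adaptation filtering.
      t = np.linspace(0, 1, 50)
      data = 2.0 * np.exp(-t / 0.5) + 0.01 * np.random.default_rng(0).normal(size=50)

      def cost(z):
          a, tau = squash(z)                     # parameters always feasible
          return np.sum((a * np.exp(-t / tau) - data) ** 2)

      res = minimize(cost, np.zeros(2), method='BFGS')   # unconstrained descent
      print('fitted (a, tau):', squash(res.x))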

  14. Synchronization in dynamical networks with unconstrained structure switching

    NASA Astrophysics Data System (ADS)

    del Genio, Charo I.; Romance, Miguel; Criado, Regino; Boccaletti, Stefano

    2015-12-01

    We provide a rigorous solution to the problem of constructing a structural evolution for a network of coupled identical dynamical units that switches between specified topologies without constraints on their structure. The evolution of the structure is determined indirectly from a carefully built transformation of the eigenvector matrices of the coupling Laplacians, which are guaranteed to change smoothly in time. In turn, this allows one to extend the master stability function formalism, which can be used to assess the stability of a synchronized state. This approach is independent from the particular topologies that the network visits, and is not restricted to commuting structures. Also, it does not depend on the time scale of the evolution, which can be faster than, comparable to, or even secular with respect to the dynamics of the units.

  15. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    PubMed

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  16. Engine With Regression and Neural Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2001-01-01

    At the NASA Glenn Research Center, the NASA engine performance program (NEPP, ref. 1) and the design optimization testbed COMETBOARDS (ref. 2) with regression and neural network analysis-approximators have been coupled to obtain a preliminary engine design methodology. The solution to a high-bypass-ratio subsonic waverotor-topped turbofan engine, which is shown in the preceding figure, was obtained by the simulation depicted in the following figure. This engine is made of 16 components mounted on two shafts with 21 flow stations. The engine is designed for a flight envelope with 47 operating points. The design optimization utilized both neural network and regression approximations, along with the cascade strategy (ref. 3). The cascade used three algorithms in sequence: the method of feasible directions, the sequence of unconstrained minimizations technique, and sequential quadratic programming. The normalized optimum thrusts obtained by the three methods are shown in the following figure: the cascade algorithm with regression approximation is represented by a triangle, a circle is shown for the neural network solution, and a solid line indicates original NEPP results. The solutions obtained from both approximate methods lie within one standard deviation of the benchmark solution for each operating point. The simulation improved the maximum thrust by 5 percent. The performance of the linear regression and neural network methods as alternate engine analyzers was found to be satisfactory for the analysis and operation optimization of air-breathing propulsion engines (ref. 4).
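
    The cascade idea, running several optimizers in sequence with each stage warm-started from the last, is easy to emulate. The sketch below chains generic scipy methods as stand-ins for the feasible-directions / SUMT / SQP sequence of the report; the objective is a placeholder, not the engine model.

      import numpy as np
      from scipy.optimize import minimize

      def f(x):                      # placeholder objective (Rosenbrock-style)
          return (x[0] - 1) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

      x = np.array([-1.2, 1.0])
      for method in ('Nelder-Mead', 'BFGS', 'SLSQP'):   # stand-in cascade
          x = minimize(f, x, method=method).x           # warm-start next stage
      print('cascade solution:', x)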

  17. Hidden Markov random field model and Broyden-Fletcher-Goldfarb-Shanno algorithm for brain image segmentation

    NASA Astrophysics Data System (ADS)

    Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane

    2018-05-01

    Many routine medical examinations produce images of patients suffering from various pathologies. With the huge number of medical images, manual analysis and interpretation became a tedious task. Thus, automatic image segmentation became essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields, referred to as HMRF, to model the problem of segmentation. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno algorithm, referred to as BFGS, is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) largely used to objectively confront the results obtained. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches the perfect segmentation with a Dice coefficient above 0.9. Moreover, it generally outperforms other methods in the tests conducted.
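
    A minimal sketch of handing an HMRF-style energy (data fidelity plus neighborhood smoothness) to BFGS is given below, on a 1-D toy signal rather than an MR volume. Relaxing the discrete labels to continuous values, and the smoothness weight, are assumptions of the sketch.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      obs = np.concatenate([np.zeros(30), np.ones(30)]) + 0.3 * rng.normal(size=60)
      beta = 2.0   # smoothness weight of the (relaxed) Markov random field prior

      def energy(x):
          data_term = np.sum((x - obs) ** 2)            # likelihood term
          smooth_term = beta * np.sum(np.diff(x) ** 2)  # neighborhood clique term
          return data_term + smooth_term

      res = minimize(energy, obs, method='BFGS')        # unconstrained minimisation
      labels = (res.x > 0.5).astype(int)                # threshold back to classes
      print(labels)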

  18. Broad-search algorithms for the spacecraft trajectory design of Callisto-Ganymede-Io triple flyby sequences from 2024 to 2040, Part II: Lambert pathfinding and trajectory solutions

    NASA Astrophysics Data System (ADS)

    Lynam, Alfred E.

    2014-01-01

    Triple-satellite-aided capture employs gravity-assist flybys of three of the Galilean moons of Jupiter in order to decrease the amount of ΔV required to capture a spacecraft into Jupiter orbit. Similarly, triple flybys can be used within a Jupiter satellite tour to rapidly modify the orbital parameters of a Jovicentric orbit, or to increase the number of science flybys. In order to provide a nearly comprehensive search of the solution space of Callisto-Ganymede-Io triple flybys from 2024 to 2040, a third-order, Chebyshev's method variant of the p-iteration solution to Lambert's problem is paired with a second-order, Newton-Raphson method, time-of-flight iteration solution to the V∞-matching problem. The iterative solutions of these problems provide the orbital parameters of the Callisto-Ganymede transfer, the Ganymede flyby, and the Ganymede-Io transfer, but the characteristics of the Callisto and Io flybys are unconstrained, so they are permitted to vary in order to produce an even larger number of trajectory solutions. The vast amount of solution data is searched to find the best triple-satellite-aided capture window between 2024 and 2040.
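
    The third-order Chebyshev variant augments the Newton step with a curvature correction. The sketch below shows the scalar iteration on a generic Kepler-type equation as a stand-in for the paper's p-iteration residual.

      import math

      def chebyshev(f, df, d2f, x, tol=1e-14, maxit=20):
          """Chebyshev's method: cubic-order root finding using f, f', f''."""
          for _ in range(maxit):
              u = f(x) / df(x)                                  # Newton correction
              x_new = x - u * (1.0 + 0.5 * u * d2f(x) / df(x))  # curvature term
              if abs(x_new - x) < tol:
                  return x_new
              x = x_new
          return x

      # Illustrative transcendental equation: Kepler's equation E - e sin E = M.
      e, M = 0.3, 1.0
      f   = lambda E: E - e * math.sin(E) - M
      df  = lambda E: 1.0 - e * math.cos(E)
      d2f = lambda E: e * math.sin(E)
      print(chebyshev(f, df, d2f, M))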

  19. David Adler Lectureship Award: n-point Correlation Functions in Heterogeneous Materials.

    NASA Astrophysics Data System (ADS)

    Torquato, Salvatore

    2009-03-01

    The determination of the bulk transport, electromagnetic, mechanical, and optical properties of heterogeneous materials has a long and venerable history, attracting the attention of some of the luminaries of science, including Maxwell, Lord Rayleigh, and Einstein. The bulk properties can be shown to depend rigorously upon infinite sets of various n-point correlation functions. Many different types of correlation functions arise, depending on the physics of the problem. A unified approach to characterize the microstructure and bulk properties of a large class of disordered materials is developed [S. Torquato, Random Heterogeneous Materials: Microstructure and Macroscopic Properties (Springer-Verlag, New York, 2002)]. This is accomplished via a canonical n-point function H_n from which one can derive exact analytical expressions for any microstructural function of interest. This microstructural information can then be used to estimate accurately the bulk properties of the material. Unlike homogeneous materials, seemingly different bulk properties (e.g., transport and mechanical properties) of a heterogeneous material can be linked to one another because of the common microstructure that they share. Such cross-property relations can be used to estimate one property given a measurement of another. A recently identified decorrelation principle, roughly speaking, refers to the phenomenon that unconstrained correlations that exist in low-dimensional disordered materials vanish as the space dimension becomes large. Among other results, this implies that in sufficiently high dimensions the densest sphere packings may be disordered (rather than ordered) [S. Torquato and F. H. Stillinger, "New Conjectural Lower Bounds on the Optimal Density of Sphere Packings," Experimental Mathematics 15, 307 (2006)].

  20. Thermo-mechanical behavior and structure of melt blown shape-memory polyurethane nonwovens.

    PubMed

    Safranski, David L; Boothby, Jennifer M; Kelly, Cambre N; Beatty, Kyle; Lakhera, Nishant; Frick, Carl P; Lin, Angela; Guldberg, Robert E; Griffis, Jack C

    2016-09-01

    New processing methods for shape-memory polymers allow material properties to be tailored for numerous applications. Shape-memory nonwovens have previously been electrospun, but melt blowing has yet to be evaluated. In order to determine the process parameters affecting shape-memory behavior, this study examined the effect of air pressure and collector speed on the mechanical behavior and shape-recovery of shape-memory polyurethane nonwovens. Mechanical behavior was measured by dynamic mechanical analysis and tensile testing, and shape-recovery was measured by unconstrained and constrained recovery. Microstructure changes throughout the shape-memory cycle were also investigated by micro-computed tomography. It was found that increasing collector speed increases the elastic modulus, ultimate strength and recovery stress of the nonwoven, but does not affect the failure strain or unconstrained recovery. Increasing air pressure decreases the failure strain and increases the rubbery modulus and unconstrained recovery, but does not influence recovery stress. It was also found that during the shape-memory cycle, the connectivity density of the fibers upon recovery does not fully return to its initial value, accounting for the incomplete shape-recovery seen in shape-memory nonwovens. With these parameter-to-property relationships identified, shape-memory nonwovens can be more easily manufactured and tailored for specific applications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and that, using a neurally based computing substrate, it completes all necessary visual tasks in real-time.

  2. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388

  3. A smart health monitoring chair for nonintrusive measurement of biological signals.

    PubMed

    Baek, Hyun Jae; Chung, Gih Sung; Kim, Ko Keun; Park, Kwang Suk

    2012-01-01

    We developed nonintrusive methods for simultaneous electrocardiogram, photoplethysmogram, and ballistocardiogram measurements that do not require direct contact between instruments and bare skin. These methods were applied to the design of a diagnostic chair for unconstrained heart rate and blood pressure monitoring. Our methods were operationalized through capacitively coupled electrodes with high-input-impedance amplifiers installed in the chair back, and conductive textiles installed in the seat for a capacitive driven-right-leg circuit configuration capable of recording electrocardiogram information through clothing. Photoplethysmograms were measured through clothing using seat-mounted sensors with specially designed amplifier circuits that vary the light intensity according to clothing type. Ballistocardiograms were recorded using a film-type transducer material, polyvinylidene fluoride (PVDF), installed beneath the seat cover. By simultaneously measuring signals, beat-to-beat heart rates could be monitored even when electrocardiograms were not recorded due to movement artifacts. Beat-to-beat blood pressure was also monitored using unconstrained measurements of pulse arrival time and other physiological parameters, and our experimental results indicated that the estimated blood pressure tended to coincide with actual blood pressure measurements. This study demonstrates the feasibility of our method and device for biological signal monitoring through clothing, enabling unconstrained long-term daily health monitoring that does not require user awareness and is not limited by physical activity.

  4. Gaussian Accelerated Molecular Dynamics in NAMD.

    PubMed

    Pang, Yui Tik; Miao, Yinglong; Wang, Yi; McCammon, J Andrew

    2017-01-10

    Gaussian accelerated molecular dynamics (GaMD) is a recently developed enhanced sampling technique that provides efficient free energy calculations of biomolecules. Like the previous accelerated molecular dynamics (aMD), GaMD allows for "unconstrained" enhanced sampling without the need to set predefined collective variables and so is useful for studying complex biomolecular conformational changes such as protein folding and ligand binding. Furthermore, because the boost potential is constructed using a harmonic function that follows Gaussian distribution in GaMD, cumulant expansion to the second order can be applied to recover the original free energy profiles of proteins and other large biomolecules, which solves a long-standing energetic reweighting problem of the previous aMD method. Taken together, GaMD offers major advantages for both unconstrained enhanced sampling and free energy calculations of large biomolecules. Here, we have implemented GaMD in the NAMD package on top of the existing aMD feature and validated it on three model systems: alanine dipeptide, the chignolin fast-folding protein, and the M3 muscarinic G protein-coupled receptor (GPCR). For alanine dipeptide, while conventional molecular dynamics (cMD) simulations performed for 30 ns are poorly converged, GaMD simulations of the same length yield free energy profiles that agree quantitatively with those of a 1000 ns cMD simulation. Further GaMD simulations have captured folding of the chignolin and binding of the acetylcholine (ACh) endogenous agonist to the M3 muscarinic receptor. The reweighted free energy profiles are used to characterize the protein folding and ligand binding pathways quantitatively. GaMD implemented in the scalable NAMD is widely applicable to enhanced sampling and free energy calculations of large biomolecules.
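
    The reweighting step described above (second-order cumulant expansion of the exponential boost average) can be sketched numerically. A minimal Python illustration, assuming per-frame boost potentials dV in the same energy units as kT and a one-dimensional collective variable cv (names illustrative, not NAMD's actual output format):

        import numpy as np

        def cumulant2_log_avg(dV, kT):
            """2nd-order cumulant estimate of ln<exp(dV/kT)> from boost samples."""
            b = np.asarray(dV) / kT
            return b.mean() + 0.5 * b.var()

        def reweighted_pmf(cv, dV, kT, bins=50):
            """Recover a 1-D free energy profile from GaMD frames, bin by bin."""
            cv, dV = np.asarray(cv), np.asarray(dV)
            edges = np.linspace(cv.min(), cv.max(), bins + 1)
            idx = np.clip(np.digitize(cv, edges) - 1, 0, bins - 1)
            pmf = np.full(bins, np.inf)
            for i in range(bins):
                mask = idx == i
                if mask.sum() > 1:
                    # unbiased p ~ biased p * <exp(dV/kT)> within the bin
                    pmf[i] = -kT * (np.log(mask.mean())
                                    + cumulant2_log_avg(dV[mask], kT))
            return 0.5 * (edges[:-1] + edges[1:]), pmf - pmf[np.isfinite(pmf)].min()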

  5. Unconstrained Capacities of Quantum Key Distribution and Entanglement Distillation for Pure-Loss Bosonic Broadcast Channels.

    PubMed

    Takeoka, Masahiro; Seshadreesan, Kaushik P; Wilde, Mark M

    2017-10-13

    We consider quantum key distribution (QKD) and entanglement distribution using a single-sender multiple-receiver pure-loss bosonic broadcast channel. We determine the unconstrained capacity region for the distillation of bipartite entanglement and secret key between the sender and each receiver, whenever they are allowed arbitrary public classical communication. A practical implication of our result is that the capacity region demonstrated drastically improves upon rates achievable using a naive time-sharing strategy, which has been employed in previously demonstrated network QKD systems. We show a simple example of a broadcast QKD protocol overcoming the limit of the point-to-point strategy. Our result is thus an important step toward opening a new framework of network channel-based quantum communication technology.

  6. Gaussian Accelerated Molecular Dynamics: Unconstrained Enhanced Sampling and Free Energy Calculation.

    PubMed

    Miao, Yinglong; Feher, Victoria A; McCammon, J Andrew

    2015-08-11

    A Gaussian accelerated molecular dynamics (GaMD) approach for simultaneous enhanced sampling and free energy calculation of biomolecules is presented. By constructing a boost potential that follows Gaussian distribution, accurate reweighting of the GaMD simulations is achieved using cumulant expansion to the second order. Here, GaMD is demonstrated on three biomolecular model systems: alanine dipeptide, chignolin folding, and ligand binding to the T4-lysozyme. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of these biomolecules. Furthermore, the free energy profiles obtained from reweighting of the GaMD simulations allow us to identify distinct low-energy states of the biomolecules and characterize the protein-folding and ligand-binding pathways quantitatively.

  7. Gaussian Accelerated Molecular Dynamics: Unconstrained Enhanced Sampling and Free Energy Calculation

    PubMed Central

    2016-01-01

    A Gaussian accelerated molecular dynamics (GaMD) approach for simultaneous enhanced sampling and free energy calculation of biomolecules is presented. By constructing a boost potential that follows Gaussian distribution, accurate reweighting of the GaMD simulations is achieved using cumulant expansion to the second order. Here, GaMD is demonstrated on three biomolecular model systems: alanine dipeptide, chignolin folding, and ligand binding to the T4-lysozyme. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of these biomolecules. Furthermore, the free energy profiles obtained from reweighting of the GaMD simulations allow us to identify distinct low-energy states of the biomolecules and characterize the protein-folding and ligand-binding pathways quantitatively. PMID:26300708

  8. Numerical modelling of bifurcation and localisation in cohesive-frictional materials

    NASA Astrophysics Data System (ADS)

    de Borst, René

    1991-12-01

    Methods are reviewed for analysing highly localised failure and bifurcation modes in discretised mechanical systems as typically arise in numerical simulations of failure in soils, rocks, metals and concrete. By the example of a plane-strain biaxial test it is shown that strain softening and lack of normality in elasto-plastic constitutive equations and the ensuing loss of ellipticity of the governing field equations cause a pathological mesh dependence of numerical solutions for such problems, thus rendering the results effectively meaningless. The need for introduction of higher-order continuum models is emphasised to remedy this shortcoming of the conventional approach. For one such continuum model, namely the unconstrained Cosserat continuum, it is demonstrated that meaningful and convergent solutions (in the sense that a finite width of the localisation zone is computed upon mesh refinement) can be obtained.

  9. The Evolution of the Intergalactic Medium

    NASA Astrophysics Data System (ADS)

    McQuinn, Matthew

    2016-09-01

    The bulk of cosmic matter resides in a dilute reservoir that fills the space between galaxies, the intergalactic medium (IGM). The history of this reservoir is intimately tied to the cosmic histories of structure formation, star formation, and supermassive black hole accretion. Our models for the IGM at intermediate redshifts (2≲z≲5) are a tremendous success, quantitatively explaining the statistics of Lyα absorption of intergalactic hydrogen. However, at both lower and higher redshifts (and around galaxies) much is still unknown about the IGM. We review the theoretical models and measurements that form the basis for the modern understanding of the IGM, and we discuss unsolved puzzles (ranging from the largely unconstrained process of reionization at high z to the missing baryon problem at low z), highlighting the efforts that have the potential to solve them.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaitsgory, Vladimir, E-mail: vladimir.gaitsgory@mq.edu.au; Rossomakhine, Sergey, E-mail: serguei.rossomakhine@flinders.edu.au

    The paper aims at the development of an apparatus for analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but a possible extension of the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.

  11. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

    Numerous nature inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems; these can be applied to real life problems where conventional techniques fail. Grey Wolf Optimizer is one such technique that has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large scale optimization problems. The algorithm is implemented on 5 common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that Grey Wolf Optimizer is a powerful nature inspired optimization algorithm for large scale problems, with the exception of the Rosenbrock function, which is unimodal.
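
    The five scalable test functions named above are standard benchmarks; their common forms are sketched below in Python (the exact bounds and shifts used in the paper are not specified here):

        import numpy as np

        def sphere(x):
            return np.sum(x ** 2)                        # unimodal, minimum 0 at 0

        def rosenbrock(x):
            return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

        def rastrigin(x):
            return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

        def ackley(x):
            n = x.size
            return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                    - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

        def griewank(x):
            i = np.arange(1, x.size + 1)
            return 1 + np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))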

  12. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
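
    The straight-line relationship for unconstrained continuous random variables can be checked numerically. Assuming (as is standard for this maximum-entropy problem) that the optimizing density is the generalized exponential family f(x) = c*exp(-(|x|/a)**p), the Python sketch below verifies that the differential entropy minus the log of the Lp norm is constant in the scale a, i.e., a straight line of unit slope:

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import gamma

        def entropy_and_log_lp(a, p):
            """Differential entropy and log Lp-norm of f(x) = c*exp(-(|x|/a)**p)."""
            c = p / (2.0 * a * gamma(1.0 / p))          # normalization constant
            def neg_f_log_f(x):                         # -f ln f, underflow-safe
                t = (abs(x) / a) ** p
                return c * np.exp(-t) * (t - np.log(c))
            h = quad(neg_f_log_f, -np.inf, np.inf)[0]
            pth_moment = quad(lambda x: abs(x) ** p * c * np.exp(-(abs(x) / a) ** p),
                              -np.inf, np.inf)[0]
            return h, np.log(pth_moment) / p            # log of the Lp norm

        for a in (0.5, 1.0, 2.0, 4.0):                  # difference is constant in a
            h, loglp = entropy_and_log_lp(a, p=3.0)
            print(f"a={a}: h - log||X||_p = {h - loglp:.6f}")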

  13. Infrared and visible fusion face recognition based on NSCT domain

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Near infrared face images, being light-independent, can avoid or limit these drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, fusion of near infrared and visible face recognition has become an important direction in unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is applied to the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to extract effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied respectively in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to combine all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
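
    As background, the LBP descriptor mentioned above thresholds each pixel's 8 neighbours against the pixel itself and packs the results into a code; a minimal Python sketch (the LGBP variant, which applies Gabor filtering before this step, is not reproduced):

        import numpy as np

        def lbp_8_1(img):
            """Basic 8-neighbour LBP codes for the interior pixels of a 2-D image."""
            img = np.asarray(img, dtype=float)
            center = img[1:-1, 1:-1]
            # clockwise neighbour offsets starting from the top-left corner
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            code = np.zeros(center.shape, dtype=int)
            h, w = img.shape
            for bit, (dy, dx) in enumerate(offsets):
                neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                code |= (neighbour >= center).astype(int) << bit
            return code  # histograms of these codes form the face descriptor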

  14. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

    A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends adaptive critic neural network architecture proposed by [5] to the optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of nitrogen transformation cycle model. Results show that adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.

  15. Sulfamethoxazole in poultry wastewater: Identification, treatability and degradation pathway determination in a membrane-photocatalytic slurry reactor.

    PubMed

    Asha, Raju C; Kumar, Mathava

    2015-01-01

    The presence of sulfamethoxazole (SMX) in a real-time poultry wastewater was identified via HPLC analysis. Subsequently, SMX removal from the poultry wastewater was investigated using a continuous-mode membrane-photocatalytic slurry reactor (MPSR). The real-time poultry wastewater was found to have an SMX concentration of 0-2.3 mg L(-1). A granular activated carbon supported TiO2 (GAC-TiO2) was synthesized, characterized and used in the MPSR experiments. The optimal MPSR condition for complete SMX removal, i.e., HRT ∼ 125 min and catalyst dosage 529.3 mg L(-1), was determined using an unconstrained optimization technique. Under the optimized condition, the effect of SMX concentration on MPSR performance was investigated by synthetic addition of SMX (i.e., 1, 25, 50, 75 and 100 mg L(-1)) to the wastewater. Interestingly, complete removal of total volatile solids (TVS), biochemical oxygen demand (BOD) and SMX was observed at all SMX concentrations investigated. However, a decline in the SMX removal rate and a proportionate increase in transmembrane pressure (TMP) were observed when the SMX concentration was increased to higher levels. In the MPSR, SMX mineralization proceeded through one of the following degradation pathways: (i) fragmentation of the isoxazole ring and (ii) elimination of the methyl and amide moieties followed by formation of the phenyl sulfinate ion. These results show that the continuous-mode MPSR has great potential for the removal of SMX and similar organic micropollutants from contaminated real-time poultry wastewater.

  16. COPS: Large-scale nonlinearly constrained optimization problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bondarenko, A.S.; Bortz, D.M.; More, J.J.

    2000-02-10

    The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.

  17. Constraints on geomagnetic secular variation modeling from electromagnetism and fluid dynamics of the Earth's core

    NASA Technical Reports Server (NTRS)

    Benton, E. R.

    1986-01-01

    A spherical harmonic representation of the geomagnetic field and its secular variation for epoch 1980, designated GSFC(9/84), is derived and evaluated. At three epochs (1977.5, 1980.0, 1982.5) this model incorporates conservation of magnetic flux through five selected patches of area on the core/mantle boundary bounded by the zero contours of the vertical magnetic field. These fifteen nonlinear constraints are included as data in an iterative least squares parameter estimation procedure that starts with the recently derived unconstrained field model GSFC(12/83). Convergence is approached within three iterations. The constrained model is evaluated by comparing its predictive capability outside the time span of its data, in terms of residuals at magnetic observatories, with that of the unconstrained model.

  18. Automated acquisition system for routine, noninvasive monitoring of physiological data.

    PubMed

    Ogawa, M; Tamura, T; Togawa, T

    1998-01-01

    A fully automated, noninvasive data-acquisition system was developed to permit long-term measurement of physiological functions at home, without disturbing subjects' normal routines. The system consists of unconstrained monitors built into furnishings and structures in a home environment. An electrocardiographic (ECG) monitor in the bathtub measures heart function during bathing, a temperature monitor in the bed measures body temperature, and a weight monitor built into the toilet serves as a scale to record weight. All three monitors are connected to one computer and function with data-acquisition programs and a data format rule. The unconstrained physiological parameter monitors and fully automated measurement procedures collect data noninvasively without the subject's awareness. The system was tested for 1 week by a healthy male subject, aged 28, in laboratory-based facilities.

  19. 3D modeling of unconstrained HPT process: role of strain gradient on high deformed microstructure formation

    NASA Astrophysics Data System (ADS)

    Ben Kaabar, A.; Aoufi, A.; Descartes, S.; Desrayaud, C.

    2017-05-01

    During the life of a tribological contact, different deformation paths lead to the formation of a highly deformed microstructure in the near-surface layers of the bodies. The mechanical conditions occurring under contact (high pressure, shear) are reproduced through an unconstrained high pressure torsion (HPT) configuration. A 3D finite element model of this HPT test is developed to study the local deformation history leading to the highly deformed microstructure under a nominal pressure and friction coefficient. For the present numerical study, the friction coefficient at the sample/anvil interface is kept constant at 0.3; the material used is high purity iron. The strain distribution in the sample bulk, as well as the main components of the strain gradients with respect to the spatial coordinates, are investigated as functions of the anvil rotation angle.

  20. Paleophysical oceanography with an emphasis on transport rates.

    PubMed

    Huybers, Peter; Wunsch, Carl

    2010-01-01

    Paleophysical oceanography is the study of the behavior of the fluid ocean of the past, with a specific emphasis on its climate implications, leading to a focus on the general circulation. Even if the circulation is not of primary concern, heavy reliance on deep-sea cores for past climate information means that knowledge of the oceanic state when the sediments were laid down is a necessity. Like the modern problem, paleoceanography depends heavily on observations, and central difficulties lie with the very limited data types and coverage that are, and perhaps ever will be, available. An approximate separation can be made into static descriptors of the circulation (e.g., its water-mass properties and volumes) and the more difficult problem of determining transport rates of mass and other properties. Determination of the circulation of the Last Glacial Maximum is used to outline some of the main challenges to progress. Apart from sampling issues, major difficulties lie with physical interpretation of the proxies, transferring core depths to an accurate timescale (the "age-model problem"), and understanding the accuracy of time-stepping oceanic or coupled-climate models when run unconstrained by observations. Despite the existence of many plausible explanatory scenarios, few features of the paleocirculation in any period are yet known with certainty.

  1. Tracking the visual focus of attention for a varying number of wandering people.

    PubMed

    Smith, Kevin; Ba, Sileye O; Odobez, Jean-Marc; Gatica-Perez, Daniel

    2008-07-01

    We define and address the problem of finding the visual focus of attention for a varying number of wandering people (VFOA-W), that is, determining the focus of attention of people whose movement is unconstrained. VFOA-W estimation is a new and important problem with implications for behavior understanding and cognitive science, as well as real-world applications. One such application, which we present in this article, monitors the attention passers-by pay to an outdoor advertisement. Our approach to the VFOA-W problem proposes a multi-person tracking solution based on a dynamic Bayesian network that simultaneously infers the (variable) number of people in a scene, their body locations, their head locations, and their head pose. For efficient inference in the resulting large variable-dimensional state-space, we propose a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling scheme, as well as a novel global observation model which determines the number of people in the scene and localizes them. We propose Gaussian Mixture Model (GMM)- and Hidden Markov Model (HMM)-based VFOA-W models which use head pose and location information to determine people's focus state. Our models are evaluated for tracking performance and the ability to recognize people looking at an outdoor advertisement, with results indicating good performance on sequences in which a moderate number of people pass in front of an advertisement.

  2. Volumetric depth peeling for medical image display

    NASA Astrophysics Data System (ADS)

    Borland, David; Clarke, John P.; Fielding, Julia R.; Taylor II, Russell M.

    2006-01-01

    Volumetric depth peeling (VDP) is an extension to volume rendering that enables display of otherwise occluded features in volume data sets. VDP decouples occlusion calculation from the volume rendering transfer function, enabling independent optimization of settings for rendering and occlusion. The algorithm is flexible enough to handle multiple regions occluding the object of interest, as well as object self-occlusion, and requires no pre-segmentation of the data set. VDP was developed as an improvement for virtual arthroscopy for the diagnosis of shoulder-joint trauma, and has been generalized for use in other simple and complex joints, and to enable non-invasive urology studies. In virtual arthroscopy, the surfaces in the joints often occlude each other, allowing limited viewpoints from which to evaluate these surfaces. In urology studies, the physician would like to position the virtual camera outside the kidney collecting system and see inside it. By rendering invisible all voxels between the observer's point of view and objects of interest, VDP enables viewing from unconstrained positions. In essence, VDP can be viewed as a technique for automatically defining an optimal data- and task-dependent clipping surface. Radiologists using VDP display have been able to perform evaluations of pathologies more easily and more rapidly than with clinical arthroscopy, standard volume rendering, or standard MRI/CT slice viewing.

  3. Wavefront Control Toolbox for James Webb Space Telescope Testbed

    NASA Technical Reports Server (NTRS)

    Shiri, Ron; Aronstein, David L.; Smith, Jeffery Scott; Dean, Bruce H.; Sabatke, Erin

    2007-01-01

    We have developed a Matlab toolbox for wavefront control of optical systems. We have applied this toolbox to optical models of the James Webb Space Telescope (JWST) in general and to the JWST Testbed Telescope (TBT) in particular, implementing both unconstrained and constrained wavefront optimization to correct for possible misalignments present on the segmented primary mirror or the monolithic secondary mirror. The optical models are implemented in the Zemax optical design program, and information is exchanged between Matlab and Zemax via the Dynamic Data Exchange (DDE) interface. The model configuration is managed using the XML protocol. The optimization algorithm uses influence functions for each adjustable degree of freedom of the optical model. Iterative and non-iterative algorithms have been developed that converge to a local minimum of the root-mean-square (rms) wavefront error using a singular value decomposition of the control matrix of influence functions. The toolkit is highly modular and allows the user to choose control strategies for the degrees of freedom to be adjusted on a given iteration and the wavefront convergence criterion. As the influence functions are nonlinear over the control parameter space, the toolkit also allows for trade-offs between the frequency of updating the local influence functions and execution speed. The functionality of the toolbox and the validity of the underlying algorithms have been verified through extensive simulations.
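
    The core linear-algebra step (a truncated singular value decomposition pseudo-inverse of the influence-function control matrix) is easy to sketch. A minimal Python illustration, assuming a linearized influence matrix and a measured wavefront vector (names illustrative):

        import numpy as np

        def control_step(influence, wavefront, rcond=1e-3):
            """Actuator commands minimizing the rms wavefront error.

            influence: (n_samples, n_actuators) influence-function matrix
            wavefront: (n_samples,) measured wavefront error
            Small singular values are truncated to avoid amplifying noise.
            """
            U, s, Vt = np.linalg.svd(influence, full_matrices=False)
            s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)
            return -(Vt.T * s_inv) @ (U.T @ wavefront)

        # One correction iteration leaves residual = wavefront + influence @ commands;
        # because the influence functions are nonlinear, the matrix may need
        # re-linearization between iterations, as the abstract notes.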

  4. Generalized bipartite quantum state discrimination problems with sequential measurements

    NASA Astrophysics Data System (ADS)

    Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki

    2018-02-01

    We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem with only Alice's measurement and is convex programming, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, its dual problem and necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful to obtain analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.

  5. Impact of isoprene and HONO chemistry on ozone and OVOC formation in a semirural South Korean forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S.; Kim, S. -Y.; Lee, M.

    Rapid urbanization and economic development in East Asia in past decades has led to photochemical air pollution problems such as excess photochemical ozone and aerosol formation. Asian megacities such as Seoul, Tokyo, Shanghai, Guangzhou, and Beijing are surrounded by densely forested areas, and recent research has consistently demonstrated the importance of biogenic volatile organic compounds (VOCs) from vegetation in determining oxidation capacity in the suburban Asian megacity regions. Uncertainties in constraining tropospheric oxidation capacity, dominated by hydroxyl radical, undermine our ability to assess regional photochemical air pollution problems. We present an observational data set of CO, NOx, SO2, ozone, HONO, and VOCs (anthropogenic and biogenic) from Taehwa research forest (TRF) near the Seoul metropolitan area in early June 2012. The data show that TRF is influenced both by aged pollution and fresh biogenic volatile organic compound emissions. With the data set, we diagnose HOx (OH, HO2, and RO2) distributions calculated using the University of Washington chemical box model (UWCM v2.1) with near-explicit VOC oxidation mechanisms from MCM v3.2 (Master Chemical Mechanism). Uncertainty from unconstrained HONO sources and radical recycling processes highlighted in recent studies is examined using multiple model simulations with different model constraints. The results suggest that (1) different model simulation scenarios cause systematic differences in HOx distributions, especially OH levels (up to 2.5 times), and (2) radical destruction (HO2 + HO2 or HO2 + RO2) could be more efficient than radical recycling (RO2 + NO), especially in the afternoon. Implications of the uncertainties in radical chemistry are discussed with respect to ozone–VOC–NOx sensitivity and VOC oxidation product formation rates. Overall, the NOx limited regime is assessed except for the morning hours (8 a.m. to 12 p.m. local standard time), but the degree of sensitivity can significantly vary depending on the model scenarios. The model results also suggest that RO2 levels are positively correlated with oxygenated VOC (OVOC) production that is not routinely constrained by observations. These unconstrained OVOCs can cause higher-than-expected OH loss rates (missing OH reactivity) and secondary organic aerosol formation. The series of modeling experiments constrained by observations strongly urge observational constraint of the radical pool to enable precise understanding of regional photochemical pollution problems in the East Asian megacity region.

  6. Impact of isoprene and HONO chemistry on ozone and OVOC formation in a semirural South Korean forest

    DOE PAGES

    Kim, S.; Kim, S. -Y.; Lee, M.; ...

    2015-04-29

    Rapid urbanization and economic development in East Asia in past decades has led to photochemical air pollution problems such as excess photochemical ozone and aerosol formation. Asian megacities such as Seoul, Tokyo, Shanghai, Guangzhou, and Beijing are surrounded by densely forested areas, and recent research has consistently demonstrated the importance of biogenic volatile organic compounds (VOCs) from vegetation in determining oxidation capacity in the suburban Asian megacity regions. Uncertainties in constraining tropospheric oxidation capacity, dominated by hydroxyl radical, undermine our ability to assess regional photochemical air pollution problems. We present an observational data set of CO, NOx, SO2, ozone, HONO, and VOCs (anthropogenic and biogenic) from Taehwa research forest (TRF) near the Seoul metropolitan area in early June 2012. The data show that TRF is influenced both by aged pollution and fresh biogenic volatile organic compound emissions. With the data set, we diagnose HOx (OH, HO2, and RO2) distributions calculated using the University of Washington chemical box model (UWCM v2.1) with near-explicit VOC oxidation mechanisms from MCM v3.2 (Master Chemical Mechanism). Uncertainty from unconstrained HONO sources and radical recycling processes highlighted in recent studies is examined using multiple model simulations with different model constraints. The results suggest that (1) different model simulation scenarios cause systematic differences in HOx distributions, especially OH levels (up to 2.5 times), and (2) radical destruction (HO2 + HO2 or HO2 + RO2) could be more efficient than radical recycling (RO2 + NO), especially in the afternoon. Implications of the uncertainties in radical chemistry are discussed with respect to ozone–VOC–NOx sensitivity and VOC oxidation product formation rates. Overall, the NOx limited regime is assessed except for the morning hours (8 a.m. to 12 p.m. local standard time), but the degree of sensitivity can significantly vary depending on the model scenarios. The model results also suggest that RO2 levels are positively correlated with oxygenated VOC (OVOC) production that is not routinely constrained by observations. These unconstrained OVOCs can cause higher-than-expected OH loss rates (missing OH reactivity) and secondary organic aerosol formation. The series of modeling experiments constrained by observations strongly urge observational constraint of the radical pool to enable precise understanding of regional photochemical pollution problems in the East Asian megacity region.

  7. A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
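
    The mechanics of the strategy (optimizers chained in sequence, with a pseudorandom perturbation of the design variables between stages) can be illustrated with general-purpose optimizers. A minimal Python sketch under that reading, not the paper's actual optimizer suite:

        import numpy as np
        from scipy.optimize import minimize

        def cascade_minimize(f, x0, methods=("Nelder-Mead", "Powell", "BFGS"),
                             perturb=1e-2, seed=0):
            """Run several optimizers in sequence, perturbing the design
            variables pseudorandomly between stages."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            for method in methods:
                res = minimize(f, x, method=method)
                x = res.x * (1.0 + perturb * rng.standard_normal(x.size))
            return minimize(f, x, method=methods[-1])  # final unperturbed polish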

  8. Quantum Assisted Learning for Registration of MODIS Images

    NASA Astrophysics Data System (ADS)

    Pelissier, C.; Le Moigne, J.; Fekete, G.; Halem, M.

    2017-12-01

    The advent of the first large scale quantum annealer by D-Wave has led to an increased interest in quantum computing. However, the D-Wave quantum annealing computer is limited to either solving Quadratic Unconstrained Binary Optimization problems (QUBOs) or using the ground state sampling of an Ising system that the D-Wave can produce. These restrictions make it challenging to find algorithms that accelerate the computation of typical Earth Science applications. A major difficulty is that most applications have continuous real-valued parameters rather than binary ones. Here we present an exploratory study using ground state sampling to train artificial neural networks (ANNs) to carry out image registration of MODIS images. The key idea in using the D-Wave to train networks is that the quantum chip behaves thermally like a Boltzmann machine (BM), and BMs are known to be successful at recognizing patterns in images. The ground state sampling of the D-Wave also depends on the dynamics of the adiabatic evolution and is subject to other non-thermal fluctuations, but the statistics are thought to be similar, and ANNs tend to be robust under fluctuations. In light of this, the D-Wave ground state sampling is used to define a Boltzmann-like generative model and is investigated for registering MODIS images. Image intensities of the MODIS images are transformed using a Discrete Cosine Transform and used to train a network of several layers to learn how to align images to a reference image. The network consists of an initial sigmoid layer acting as a binary filter of the input, followed by strict binarization using Bernoulli sampling, whose output is fed into a Boltzmann machine. The output is then classified using a soft-max layer. Results are presented and discussed.
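
    For readers unfamiliar with the QUBO form the D-Wave accepts, it asks for a binary vector x minimizing x^T Q x. A minimal brute-force Python sketch with a toy penalty matrix (illustrative only, not the MODIS registration encoding):

        import itertools
        import numpy as np

        def qubo_bruteforce(Q):
            """Exact minimizer of x^T Q x over binary x (viable only for small n)."""
            n = Q.shape[0]
            best_x, best_e = None, np.inf
            for bits in itertools.product((0, 1), repeat=n):
                x = np.array(bits)
                e = x @ Q @ x
                if e < best_e:
                    best_x, best_e = x, e
            return best_x, best_e

        # Example: Q encoding "exactly one of three variables on", from (sum(x)-1)^2
        Q = np.array([[-1, 2, 2], [0, -1, 2], [0, 0, -1]])
        print(qubo_bruteforce(Q))  # a one-hot vector achieves the minimum energy -1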

  9. Online sparse Gaussian process based human motion intent learning for an electrically actuated lower extremity exoskeleton.

    PubMed

    Long, Yi; Du, Zhi-Jiang; Chen, Chao-Feng; Dong, Wei; Wang, Wei-Dong

    2017-07-01

    The most important step for a lower extremity exoskeleton is to infer human motion intent (HMI), which contributes to achieving human-exoskeleton collaboration. Since the user is in the control loop, the relationship between human robot interaction (HRI) information and HMI is nonlinear and complicated, and therefore difficult to model with analytical approaches; the nonlinear mapping can instead be learned with machine learning approaches. Gaussian Process (GP) regression is suitable for high-dimensional, small-sample nonlinear regression problems, but is restrictive for large data sets due to its computational complexity. In this paper, an online sparse GP algorithm is constructed to learn the HMI. The original training dataset is collected while the user wears the exoskeleton system with friction compensation and performs movement as unconstrained as possible. The dataset has two kinds of data: (1) physical HRI, collected by torque sensors placed at the interaction cuffs of the active joints, i.e., the knee joints; and (2) joint angular position, measured by optical position sensors. To reduce the computational complexity of GP, grey relational analysis (GRA) is utilized to screen the original dataset and provide the final training dataset. The hyper-parameters are optimized offline by maximizing the marginal likelihood and then applied in the online GP regression algorithm. The HMI, i.e., the angular position of the human joints, is regarded as the reference trajectory for the mechanical legs. To verify the effectiveness of the proposed algorithm, experiments are performed on a subject moving at a natural speed. The experimental results show that the HMI can be obtained in real time, which can be extended and employed in similar exoskeleton systems.
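
    The regression step can be illustrated with an off-the-shelf GP: hyper-parameters are fit by maximizing the marginal likelihood, and training on a reduced subset stands in, loosely, for the paper's GRA-based data screening. A Python sketch with synthetic stand-in data, not the paper's torque/angle dataset:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Hypothetical stand-ins: X = interaction-torque features,
        # y = joint angle to be predicted as the motion intent.
        rng = np.random.default_rng(1)
        X = rng.uniform(-1, 1, size=(200, 2))
        y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(200)

        # Train on a subset to keep the cubic-cost GP tractable.
        subset = rng.choice(200, size=50, replace=False)
        gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                       normalize_y=True).fit(X[subset], y[subset])
        mean, std = gpr.predict(X[:5], return_std=True)  # intent + uncertainty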

  10. Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.

    PubMed

    Chen, C W; Chen, D Z

    2001-11-01

    Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
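
    A penalty-function route of the kind analyzed here augments the data-fitting loss with a term that punishes negative slopes of the network output. A minimal Python sketch of such a loss, assuming model is any callable mapping inputs to outputs (illustrative of the penalty idea only, not the paper's exponential weight or adaptive methods):

        import numpy as np

        def monotonic_penalty_loss(model, x, y, x_grid, weight=10.0, eps=1e-3):
            """MSE plus a penalty on negative finite-difference slopes of the
            network output over a grid, enforcing increasing monotonicity."""
            mse = np.mean((model(x) - y) ** 2)
            slope = (model(x_grid + eps) - model(x_grid)) / eps
            penalty = np.mean(np.maximum(0.0, -slope) ** 2)
            return mse + weight * penalty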

  11. Quasi-Newton parallel geometry optimization methods

    NASA Astrophysics Data System (ADS)

    Burger, Steven K.; Ayers, Paul W.

    2010-07-01

    Algorithms for parallel unconstrained minimization of molecular systems are examined. The overall framework of minimization is the same except for the choice of directions for updating the quasi-Newton Hessian. Ideally these directions are chosen so that the updated Hessian gives the same steps as Newton's method. Three approaches to determining the update directions are presented: the straightforward approach of simply cycling through the Cartesian unit vectors (finite difference), a concurrent set of minimizations, and the Lanczos method. We show the importance of using preconditioning and a multiple secant update in these approaches. For the Lanczos algorithm, an initial set of directions is required to start the method, and a number of possibilities are explored. To test the methods we used the standard 50-dimensional analytic Rosenbrock function. Results are also reported for the histidine dipeptide, the isoleucine tripeptide, and cyclic adenosine monophosphate. All of these systems show a significant speed-up with the number of processors, up to about eight processors.
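
    For reference, the serial baseline on the same test problem is a one-liner with a standard quasi-Newton (BFGS) minimizer in SciPy; the paper's contribution, parallelizing the Hessian-update directions, is not shown here:

        import numpy as np
        from scipy.optimize import minimize, rosen, rosen_der

        x0 = np.zeros(50)                         # 50-dimensional start point
        res = minimize(rosen, x0, jac=rosen_der, method="BFGS")
        print(res.fun, res.nit)                   # expect f near 0 at x = (1,...,1)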

  12. Nonlinear aeroservoelastic analysis of a controlled multiple-actuated-wing model with free-play

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Hu, Haiyan; Zhao, Yonghui

    2013-10-01

    In this paper, the effects of structural nonlinearity due to free-play in both the leading-edge and trailing-edge outboard control surfaces on a linear flutter control system are analyzed for an aeroelastic model of a three-dimensional multiple-actuated wing. The free-play nonlinearities in the control surfaces are modeled theoretically using the fictitious mass approach. The nonlinear aeroelastic equations of the presented model can be divided into nine sub-linear modal-based aeroelastic equations according to the different combinations of deflections of the leading-edge and trailing-edge outboard control surfaces, and the nonlinear aeroelastic responses can be computed from these sub-linear aeroelastic systems. To demonstrate the effects of the nonlinearity on the linear flutter control system, a single-input single-output controller and a multi-input multi-output controller are designed based on unconstrained optimization techniques. The numerical results indicate that the free-play nonlinearity can lead to either limit cycle oscillations or divergent motions when the linear control system is implemented.
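
    The free-play nonlinearity itself is commonly written as a dead-zone in the hinge restoring moment. A minimal Python sketch of that piecewise-linear model (the paper treats it via the fictitious mass approach, which is not reproduced here):

        import numpy as np

        def freeplay_moment(delta, k, gap):
            """Hinge restoring moment with free-play: zero inside the +/- gap
            dead-zone, linear stiffness k once the gap is taken up."""
            return k * np.sign(delta) * np.maximum(np.abs(delta) - gap, 0.0)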

  13. Evaluation of structural and thermophysical effects on the measurement accuracy of deep body thermometers based on dual-heat-flux method.

    PubMed

    Huang, Ming; Tamura, Toshiyo; Chen, Wenxi; Kanaya, Shigehiko

    2015-01-01

    To help pave the way toward practical continuous, unconstrained, noninvasive deep body temperature measurement, this study aims to evaluate the structural and thermophysical effects on measurement accuracy for the dual-heat-flux method (DHFM). By considering the thermometer's height, radius, conductivity, density and specific heat as variables affecting the accuracy of DHFM measurement, we investigated the relationship between those variables and accuracy using 3-D models based on the finite element method. The results of our simulation study show that accuracy is proportional to the radius but inversely proportional to the thickness of the thermometer when the radius is less than 30.0 mm, and is also inversely proportional to the heat conductivity of the heat insulator inside the thermometer. The insights from this study will help to build guidelines for the design, fabrication and optimization of DHFM-based thermometers, as well as their practical use. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Microbend fiber-optic temperature sensor

    DOEpatents

    Weiss, J.D.

    1995-05-30

    A temperature sensor is made of optical fiber into which quasi-sinusoidal microbends have been permanently introduced. In particular, the present invention includes a graded-index optical fiber directing steady light through a section of the optical fiber containing a plurality of permanent microbends. The microbend section of the optical fiber is contained in a thermally expansive sheath, attached to a thermally expansive structure, or attached to a bimetallic element undergoing temperature changes and being monitored. The microbend section is secured to the thermally expansive sheath which allows the amplitude of the microbends to decrease with temperature. The resultant increase in the optical fiber's transmission thus allows temperature to be measured. The plural microbend section of the optical fiber is secured to the thermally expansive structure only at its ends and the microbends themselves are completely unconstrained laterally by any bonding agent to obtain maximum longitudinal temperature sensitivity. Although the permanent microbends reduce the transmission capabilities of fiber optics, the present invention utilizes this phenomenon as a transduction mechanism which is optimized to measure temperature. 5 figs.

  15. Microbend fiber-optic temperature sensor

    DOEpatents

    Weiss, Jonathan D.

    1995-01-01

    A temperature sensor is made of optical fiber into which quasi-sinusoidal microbends have been permanently introduced. In particular, the present invention includes a graded-index optical fiber directing steady light through a section of the optical fiber containing a plurality of permanent microbends. The microbend section of the optical fiber is contained in a thermally expansive sheath, attached to a thermally expansive structure, or attached to a bimetallic element undergoing temperature changes and being monitored. The microbend section is secured to the thermally expansive sheath which allows the amplitude of the microbends to decrease with temperature. The resultant increase in the optical fiber's transmission thus allows temperature to be measured. The plural microbend section of the optical fiber is secured to the thermally expansive structure only at its ends and the microbends themselves are completely unconstrained laterally by any bonding agent to obtain maximum longitudinal temperature sensitivity. Although the permanent microbends reduce the transmission capabilities of fiber optics, the present invention utilizes this phenomenon as a transduction mechanism which is optimized to measure temperature.

  16. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  17. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.

  18. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    NASA Technical Reports Server (NTRS)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  19. On the effects of grid ill-conditioning in three dimensional finite element vector potential magnetostatic field computations

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1990-01-01

    The effects of finite element grid geometries and associated ill-conditioning were studied in single medium and multi-media (air-iron) three dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single medium applications the unconstrained magnetic vector potential curl-curl formulation in conjunction with first order finite elements produces global results which are almost totally insensitive to grid geometries. However, it was found that in multi-media (air-iron) applications first order finite element results are sensitive to grid geometries and the consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by the use of second order finite elements in the field computation algorithms. Practical examples are given in this paper to demonstrate the aspects mentioned above.

  20. Localization of an Underwater Control Network Based on Quasi-Stable Adjustment.

    PubMed

    Zhao, Jianhu; Chen, Xinhua; Zhang, Hongmei; Feng, Jie

    2018-03-23

    A common problem in the localization of underwater control networks is that the absolute coordinates of known points obtained by marine absolute measurement have poor precision, which seriously degrades the precision of the whole network under traditional constrained adjustment. Because the precision of the underwater baselines is good, we use them to carry out a quasi-stable adjustment that amends the known points before the constrained adjustment, so that the points fit the network shape better. In addition, we add an unconstrained adjustment for quality control of the underwater baselines, which are the observations of the quasi-stable and constrained adjustments, to eliminate unqualified baselines and improve the accuracy of the two adjustments. Finally, the modified method is applied to a practical LBL (Long Baseline) experiment and achieves a mean point location precision of 0.08 m, a 38% improvement over the traditional method.

  2. Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Witzberger, Kevin E.; Zeiler, Tom

    2012-01-01

    This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF), and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable to general trajectory optimization problems. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed into constrained nonlinear programming (NLP) problems and solved with SNOPT, a commercially available NLP solver. A detailed description of the developed optimization software is provided, as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF TLI trajectory optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.
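
    The transcription step described above turns a continuous optimal control problem into a finite-dimensional NLP. The following sketch applies trapezoidal collocation to a toy double integrator (minimum control effort, rest-to-rest transfer); SciPy's SLSQP stands in for SNOPT, and the problem is an illustrative stand-in for the TLI dynamics, not the paper's model.

      import numpy as np
      from scipy.optimize import minimize

      N, T = 20, 1.0                 # segments, horizon
      h = T / N

      def unpack(z):
          x = z[:2 * (N + 1)].reshape(N + 1, 2)   # states (pos, vel)
          u = z[2 * (N + 1):]                     # control at each node
          return x, u

      def objective(z):              # trapezoidal quadrature of u^2
          _, u = unpack(z)
          return h * np.sum(0.5 * (u[:-1] ** 2 + u[1:] ** 2))

      def defects(z):                # trapezoidal collocation constraints
          x, u = unpack(z)
          f = np.column_stack([x[:, 1], u])        # pos' = vel, vel' = u
          return (x[1:] - x[:-1] - 0.5 * h * (f[1:] + f[:-1])).ravel()

      def boundary(z):               # x(0)=v(0)=0, x(T)=1, v(T)=0
          x, _ = unpack(z)
          return np.array([x[0, 0], x[0, 1], x[-1, 0] - 1.0, x[-1, 1]])

      z0 = np.zeros(2 * (N + 1) + (N + 1))
      res = minimize(objective, z0, method="SLSQP",
                     constraints=[{"type": "eq", "fun": defects},
                                  {"type": "eq", "fun": boundary}])
      print(res.fun)                 # approaches the analytic minimum, 12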

  3. Gaussian Accelerated Molecular Dynamics: Theory, Implementation, and Applications

    PubMed Central

    Miao, Yinglong; McCammon, J. Andrew

    2018-01-01

    A novel Gaussian Accelerated Molecular Dynamics (GaMD) method has been developed for simultaneous unconstrained enhanced sampling and free energy calculation of biomolecules. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of the biomolecules. Furthermore, by constructing a boost potential that follows a Gaussian distribution, accurate reweighting of GaMD simulations is achieved via cumulant expansion to the second order. The free energy profiles obtained from GaMD simulations allow us to identify distinct low energy states of the biomolecules and characterize biomolecular structural dynamics quantitatively. In this chapter, we present the theory of GaMD, its implementation in the widely used molecular dynamics software packages (AMBER and NAMD), and applications to the alanine dipeptide biomolecular model system, protein folding, biomolecular large-scale conformational transitions and biomolecular recognition. PMID:29720925
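
    The second-order cumulant expansion mentioned above approximates the log of the reweighting factor <exp(beta dV)> by beta*mean(dV) + 0.5*beta^2*var(dV), which stays well behaved because the boost dV is near-Gaussian. A minimal sketch with invented boost energies (the temperature and the per-bin usage pattern are assumptions):

      import numpy as np

      kB = 0.001987                 # kcal/(mol K)
      beta = 1.0 / (kB * 300.0)     # assumed simulation temperature 300 K

      def log_reweight(dV):
          """log <exp(beta dV)> via cumulants to second order."""
          dV = np.asarray(dV, dtype=float)
          return beta * dV.mean() + 0.5 * beta ** 2 * dV.var()

      # Per bin of a reaction coordinate, the unbiased free energy is
      # recovered as F = F_biased - log_reweight(dV_bin) / beta.
      rng = np.random.default_rng(0)
      dV_bin = rng.normal(5.0, 1.5, 10000)   # boost energies, kcal/mol
      print(log_reweight(dV_bin) / beta)     # effective correction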

  4. Impact of DNA twist accumulation on progressive helical wrapping of torsionally constrained DNA.

    PubMed

    Li, Wei; Wang, Peng-Ye; Yan, Jie; Li, Ming

    2012-11-21

    DNA wrapping is an important mechanism for chromosomal DNA packaging in cells and viruses. Previous studies of DNA wrapping have been performed mostly on torsionally unconstrained DNA, whereas in vivo DNA is often under torsional constraint. In this study, we extend a previously proposed theoretical model for wrapping of torsionally unconstrained DNA to a new model that includes the contribution of DNA twist energy, which influences DNA wrapping drastically. In particular, because twist energy accumulates during DNA wrapping, the model predicts that only a finite amount of DNA can be wrapped on a helical spool. The predictions of the new model are tested in a single-molecule study of DNA wrapping under torsional constraint using magnetic tweezers. The theoretical predictions and the experimental results are consistent with each other, and their implications are discussed.

  5. Constrained evolution in numerical relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  6. Intraoperative pulmonary embolism of Harrington rod during spinal surgery: the potential dangers of rod cutting.

    PubMed

    Aylott, Caspar E W; Hassan, Kamran; McNally, Donal; Webb, John K

    2006-12-01

    This is a case report and laboratory-based biomechanics study. The objective is to report the first case of titanium rod embolisation into the pulmonary artery during scoliosis surgery, and to investigate the potential of an unconstrained cut titanium rod fragment to cause wounding, with reference to recognised weapons. Embolisation of a foreign body to the heart is rare; bullet embolisation to the heart and lungs has been reported only infrequently over the last 80 years, and iatrogenic cases of foreign body embolisation are very rare. Fifty 1-2 cm segments of titanium rod were cut in an unconstrained manner and a novel method was used to calculate velocity. A high-speed camera (6,000 frames/s) was used to further measure velocity and study projectile motion. The wounding potential was investigated using lamb's liver, high-speed photography and local dissection. Rod velocities were measured in excess of 23 m s(-1). Rods were seen to tumble end-over-end with a maximum speed of 560 revolutions/s. The maximum kinetic energy was 0.61 J, which is approximately 2% of that of a crossbow. This is sufficient to cause significant liver damage. The degree of surface damage and internal disruption was influenced by the orientation of the rod fragment at impact. An unconstrained cut segment of a titanium rod has significant potential to wound. Precautions should be taken to avoid this potentially disastrous but preventable complication.

  7. Long-term Outcome of Unconstrained Primary Total Hip Arthroplasty in Ipsilateral Residual Poliomyelitis.

    PubMed

    Buttaro, Martín A; Slullitel, Pablo A; García Mansilla, Agustín M; Carlucci, Sofía; Comba, Fernando M; Zanotti, Gerardo; Piccaluga, Francisco

    2017-03-01

    Incapacitating articular sequelae in the hip joint have been described for patients with late effects of poliomyelitis. In these patients, total hip arthroplasty (THA) has been associated with a substantial rate of dislocation. This study was conducted to evaluate the long-term clinical and radiologic outcomes of unconstrained THA in this specific group of patients. The study included 6 patients with ipsilateral polio who underwent primary THA between 1985 and 2006. Patients with polio who underwent THA on the nonparalytic limb were excluded. Mean follow-up was 119.5 months (minimum, 84 months). Clinical outcomes were evaluated with the modified Harris Hip Score (mHHS) and the visual analog scale (VAS) pain score. Radiographs were examined to identify the cause of complications and determine the need for revision surgery. All patients showed significantly better functional results when preoperative and postoperative mHHS (67.58 vs 87.33, respectively; P=.002) and VAS pain score (7.66 vs 2, respectively; P=.0003) were compared. Although 2 cases of instability were diagnosed, only 1 patient needed acetabular revision as a result of component malpositioning. None of the patients had component loosening, osteolysis, or infection. Unconstrained THA in the affected limb of patients with poliomyelitis showed favorable long-term clinical results, with improved function and pain relief. Nevertheless, instability may be a more frequent complication in this group of patients compared with the general population. [Orthopedics. 2017; 40(2):e255-e261.]. Copyright 2016, SLACK Incorporated.

  8. Ethanol self-administration in serotonin transporter knockout mice: unconstrained demand and elasticity.

    PubMed

    Lamb, R J; Daws, L C

    2013-10-01

    Low serotonin function is associated with alcoholism, leading to speculation that increasing serotonin function could decrease ethanol consumption. Mice with one or two deletions of the serotonin transporter (SERT) gene have increased extracellular serotonin. To examine the relationship between SERT genotype and motivation for alcohol, we compared ethanol self-administration in mice with zero (knockout, KO), one (HET) or two copies (WT) of the SERT gene. All three genotypes learned to self-administer ethanol. The SSRI fluvoxamine decreased responding for ethanol in the HET and WT, but not the KO mice. When tested under a progressive ratio schedule, KO mice had lower breakpoints than HET or WT mice. As work requirements were increased across sessions, behavioral economic analysis of ethanol self-administration indicated that the decreased breakpoint in KO as compared to HET or WT mice was a result of lower levels of unconstrained demand, rather than differences in elasticity; i.e. the proportional decreases in ethanol earned with increasing work requirements were similar across genotypes. The difference in unconstrained demand was unlikely to result from motor or general motivational factors, as both WT and KO mice responded at high levels for a 50% condensed milk solution. As elasticity is hypothesized to measure essential value, these results indicate that KO mice value ethanol similarly to WT or HET mice despite having lower breakpoints for ethanol. © 2013 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
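
    Demand analyses of this kind are commonly done by fitting an exponential demand equation, for example that of Hursh and Silberberg (2008), in which Q0 captures unconstrained demand (intensity) and alpha captures elasticity. Whether this exact equation was used in the study is an assumption here, and the data below are invented.

      # log10 Q = log10 Q0 + k * (exp(-alpha * Q0 * C) - 1)
      import numpy as np
      from scipy.optimize import curve_fit

      k = 3.0                                    # fixed range parameter

      def log_demand(C, logQ0, alpha):
          Q0 = 10.0 ** logQ0
          return logQ0 + k * (np.exp(-alpha * Q0 * C) - 1.0)

      cost = np.array([1, 2, 4, 8, 16, 32], dtype=float)      # FR requirement
      intake = np.array([1.0, 0.95, 0.85, 0.6, 0.35, 0.15])   # g/kg earned
      (logQ0_hat, alpha_hat), _ = curve_fit(log_demand, cost,
                                            np.log10(intake), p0=[0.0, 0.01])
      print(10.0 ** logQ0_hat, alpha_hat)   # unconstrained demand, elasticity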

  9. Assessment of patient functional performance in different knee arthroplasty designs during unconstrained squat

    PubMed Central

    Verdini, Federica; Zara, Claudio; Leo, Tommaso; Mengarelli, Alessandro; Cardarelli, Stefano; Innocenti, Bernardo

    2017-01-01

    Summary. Background. In this paper the squat, termed "unconstrained" by the authors because it is performed without constraints on foot position, speed, or the maximum knee angle to be reached, was tested as a motor task able to reveal differences in functional performance after knee arthroplasty. It involves large joint ranges of motion, does not compromise joint safety, and requires accurate control strategies to maintain balance. Methods. Motion capture techniques were used to study the squat in a healthy control group (CTR) and in three groups, each characterised by a specific knee arthroplasty design: a Total Knee Arthroplasty (TKA), a Mobile Bearing and a Fixed Bearing Unicompartmental Knee Arthroplasty (MBUA and FBUA, respectively). The squat was analysed during the descent, maintenance and ascent phases and described by speed, angular kinematics of the lower and upper body, the Center of Pressure (CoP) trajectory, and muscle activation timing of the quadriceps and biceps femoris. Results. Compared to CTR, for TKA and MBUA the knee maximum flexion was lower, vertical speed during descent and ascent was reduced, and the duration of the whole movement was longer. CoP mean distance was higher for all arthroplasty groups during descent, as was CoP mean velocity for MBUA and TKA during ascent and descent. Conclusions. The unconstrained squat is able to reveal differences in functional performance between control and arthroplasty groups and between different arthroplasty designs. Considering the similarity index calculated for the variables showing statistical significance, FBUA performance appears to be closest to that of the CTR group. Level of evidence: IIIa. PMID:29387646

  10. Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

    2002-01-01

    The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that exhibits severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.
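
    Particle swarm optimization maintains a population of candidate designs attracted toward each particle's best known point and the swarm's global best; because it uses no gradients, it tolerates the numerical noise and discrete variables cited above. A minimal continuous-variable sketch follows (the paper's MDO-specific handling of discrete variables is not reproduced, and all parameter values are conventional defaults):

      import numpy as np

      def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          rng = np.random.default_rng(0)
          lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
          x = rng.uniform(lo, hi, (n_particles, lo.size))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
          g = pbest[pbest_f.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)          # keep particles in bounds
              fx = np.array([f(p) for p in x])
              better = fx < pbest_f
              pbest[better], pbest_f[better] = x[better], fx[better]
              g = pbest[pbest_f.argmin()].copy()
          return g, pbest_f.min()

      best, val = pso(lambda p: np.sum(p ** 2), ([-5.0, -5.0], [5.0, 5.0]))
      print(best, val)   # converges near the origin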

  11. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring that a recombination operator in a genetic algorithm can produce from two given parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.
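
    For reference, the makespan criterion named above is computed for a permutation flowshop by the standard completion-time recurrence; the processing times below are invented.

      import numpy as np

      def makespan(order, p):
          """p[j, m] = processing time of job j on machine m; C[m]
          tracks the completion time of the last scheduled job on m."""
          n_machines = p.shape[1]
          C = np.zeros(n_machines)
          for j in order:
              C[0] += p[j, 0]
              for m in range(1, n_machines):
                  C[m] = max(C[m], C[m - 1]) + p[j, m]
          return C[-1]

      p = np.array([[3, 2, 4], [1, 5, 2], [4, 1, 3]])   # 3 jobs x 3 machines
      print(makespan([0, 1, 2], p), makespan([1, 0, 2], p))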

  12. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
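
    To make the nested structure concrete, here is a toy single-objective bilevel solve in which every upper-level trial point requires a full lower-level optimization before the leader objective can be evaluated; this is the computationally expensive nested procedure that the paper's hybrid algorithm is designed to outperform. The objective functions and solver choices are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize, minimize_scalar

      def lower_optimal(x):
          # follower: min_y (y - x)^2, so y*(x) = x
          return minimize_scalar(lambda y: (y - x) ** 2).x

      def upper(x):
          y = lower_optimal(np.asarray(x).item())
          return (x[0] - 1.0) ** 2 + (y - 2.0) ** 2   # leader F(x, y*(x))

      res = minimize(upper, x0=[0.0], method="Nelder-Mead")
      print(res.x, res.fun)   # analytic optimum: x = 1.5, F = 0.5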

  13. Finite dimensional approximation of a class of constrained nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Hou, L. S.

    1994-01-01

    An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, an approximate problem posed on finite dimensional spaces, and a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from it are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity; the second, the Ginzburg-Landau equations of superconductivity; and the third, the Navier-Stokes equations for incompressible, viscous flows.

  14. LDRD Final Report: Global Optimization for Engineering Science Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HART,WILLIAM E.

    1999-12-01

    For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD ''Global Optimization for Engineering Science Problems'' was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned and describes open research issues.

  15. The Unconstrained Event Bulletin (UEB) for the IMS Seismic Network Spanning the Period May 15-28, 2010: a New Resource for Algorithm Development and Testing

    NASA Astrophysics Data System (ADS)

    Brogan, R.; Young, C. J.; Ballard, S.

    2017-12-01

    A major problem in developing new data processing algorithms for seismic event monitoring is the lack of standard, high-quality "ground-truth" data sets to test against. The unfortunate effect of this is that new algorithms are often developed and tested with new data sets, making comparison of algorithms difficult and subjective. As an effort towards resolving this problem, we have developed the Unconstrained Event Bulletin (UEB), a ground-truth data set from the International Monitoring System (IMS) primary and auxiliary seismic networks for a two-week period in May 2010. All UEB analysis was performed by the same expert, who has more than 30 years of experience analyzing seismic data for nuclear explosion monitoring. We used the most complete International Data Centre (IDC) analyst-reviewed event bulletin (the Late Event Bulletin, or LEB) as the starting point for this analysis. To make the UEB more complete, we relaxed the minimum event-definition criteria to a pair of P-type and S-type phases at a single station, with azimuth/slowness used as defining parameters. In rare cases, events that our analyst recognized and did not want to omit were constructed using only one P-phase. Perhaps most importantly, our analyst spent on average more than 60 hours per day of data analyzed, far more than was possible in the production of the LEB. As a result, while the LEB contained 2,101 events for the two-week time period, the UEB contains 11,435 events, an increase of over 400%. The new events are located all over the world and include both earthquakes and manmade events such as mining explosions. Our intent is to make the UEB data set openly available for all researchers to use for testing detection, correlation, and location algorithms, thus making it much easier to objectively compare different research efforts. Acknowledgement: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.

  16. Research on NC laser combined cutting optimization model of sheet metal parts

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Zhang, Y. L.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    This paper takes the optimization problem for NC laser combined cutting of sheet metal parts as its research object. The problem comprises two parts: combined packing optimization and combined cutting path optimization. For combined packing optimization, the method of "genetic algorithm + gravity center NFP + geometric transformation" was used to optimize the packing of the sheet metal parts. For combined cutting path optimization, a mathematical model of cutting path optimization was established based on the part-cutting constraint rules of internal contour priority and cross cutting (see the sketch below). The model plays an important role in the optimization calculation of NC laser combined cutting.
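
    As a rough illustration of the internal-contour-priority rule, the sketch below cuts each part's internal contours before its external contour and orders parts greedily by nearest neighbour. The data structure and the greedy tour are illustrative assumptions, not the paper's genetic-algorithm model.

      import math

      parts = [  # (part id, external-contour start point, internal starts)
          ("A", (0.0, 0.0), [(0.2, 0.2)]),
          ("B", (5.0, 1.0), [(5.2, 1.2), (5.5, 1.5)]),
          ("C", (2.0, 4.0), []),
      ]

      def plan_path(parts, start=(0.0, 0.0)):
          pos, remaining, path = start, list(parts), []
          while remaining:
              # visit the nearest remaining part next
              nxt = min(remaining, key=lambda p: math.dist(pos, p[1]))
              remaining.remove(nxt)
              pid, ext, internals = nxt
              # internal contours are cut before the external one
              path += [(pid, "internal", pt) for pt in internals]
              path.append((pid, "external", ext))
              pos = ext
          return path

      for step in plan_path(parts):
          print(step)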

  17. An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.

    PubMed

    Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur

    2017-01-01

    Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level (leader) problem, subject to the optimality of a lower-level (follower) problem. Several problems from the domains of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. The increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine the advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both the upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with those of other established algorithms to demonstrate the efficacy of the proposed approach.

  18. Optimal Price Decision Problem for Simultaneous Multi-article Auction and Its Optimal Price Searching Method by Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Masuda, Kazuaki; Aiyoshi, Eitaro

    We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. An auction problem, originally formulated as a combinatorial problem, determines, for every seller, whether or not to sell his/her article and, for every buyer, which article(s) to buy, so that the total utility of buyers and sellers is maximized. Using duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method that solves it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.
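
    To make the dual-price interpretation concrete, the sketch below adjusts prices with a plain subgradient step, raising a price wherever demand exceeds supply; the paper instead searches the same dual landscape with PSO. The valuations, supplies, and step size are invented.

      import numpy as np

      values = np.array([[8.0, 4.0],    # buyer i's value for article j
                         [6.0, 7.0],
                         [5.0, 6.0]])
      supply = np.array([1, 1])          # one unit of each article
      p = np.zeros(2)                    # prices = Lagrange multipliers

      for _ in range(200):
          # each buyer demands the article with maximal positive surplus
          surplus = values - p
          choice = surplus.argmax(axis=1)
          wants = surplus[np.arange(values.shape[0]), choice] > 0
          demand = np.bincount(choice[wants], minlength=supply.size)
          p = np.maximum(0.0, p + 0.05 * (demand - supply))  # subgradient

      print(p)   # settles near market-clearing prices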

  19. Techniques for shuttle trajectory optimization

    NASA Technical Reports Server (NTRS)

    Edge, E. R.; Shieh, C. J.; Powers, W. F.

    1973-01-01

    The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed, along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage requirements. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. In shuttle trajectory optimization, however, storage is a genuine concern, and it was handled efficiently here. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.

  20. The problem of self-disclosure in psychoanalysis.

    PubMed

    Meissner, W W

    2002-01-01

    The problem of self-disclosure is explored in relation to currently shifting paradigms of the nature of the analytic relation and analytic interaction. Relational and intersubjective perspectives emphasize the role of self-disclosure as not merely allowable, but as an essential facilitating aspect of the analytic dialogue, in keeping with the role of the analyst as a contributing partner in the process. At the opposite extreme, advocates of classical anonymity stress the importance of neutrality and abstinence. The paper seeks to chart a course between unconstrained self-disclosure and absolute anonymity, both of which foster misalliances. Self-disclosure is seen as at times contributory to the analytic process, and at times deleterious. The decision whether to self-disclose, what to disclose, and when and how, should be guided by the analyst's perspective on neutrality, conceived as a mental stance in which the analyst assesses and decides what, at any given point, seems to contribute to the analytic process and the patient's therapeutic benefit. The major risk in self-disclosure is the tendency to draw the analytic interaction into the real relation between analyst and patient, thus diminishing or distorting the therapeutic alliance, mitigating transference expression, and compromising therapeutic effectiveness.

  1. On l(sup 1) optimal decentralized performance

    NASA Technical Reports Server (NTRS)

    Sourlas, Dennis; Manousiouthakis, Vasilios

    1993-01-01

    In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the l(sup 1) optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computations. It is shown that finite dimensional optimization problems whose values are arbitrarily close to the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the l(sup 1) optimal decentralized performance problem is presented. A global optimization approach to the solution of the finite dimensional approximating problems is also discussed.

  2. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
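
    A one-line analogue of that construction: a function whose minimizer and minimum are both unity by design, so any approach can be checked against a known answer. The paper's actual three-discipline problems couple state variables and constraints and are much richer than this sketch.

      import numpy as np

      def synthetic(x):
          """Minimum value 1.0, attained at x = (1, ..., 1) by design."""
          x = np.asarray(x, dtype=float)
          return 1.0 + np.sum((x - 1.0) ** 2)

      print(synthetic(np.ones(5)))   # -> 1.0, as prescribed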

  3. Time-domain finite elements in optimal control with application to launch-vehicle guidance. PhD. Thesis

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.

    1991-01-01

    A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.

  4. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics

    NASA Technical Reports Server (NTRS)

    Baluja, Shumeet

    1995-01-01

    This report is a repository of the results obtained from a large scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, binpacking, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range in size from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. Descriptions of the algorithms tested and the encodings of the problems are described in detail for reproducibility.

  5. Pareto-optimal reversed-phase chromatography separation of three insulin variants with a solubility constraint.

    PubMed

    Arkell, Karolina; Knutson, Hans-Kristian; Frederiksen, Søren S; Breil, Martin P; Nilsson, Bernt

    2018-01-12

    With the shift of focus of the regulatory bodies, from fixed process conditions towards flexible ones based on process understanding, model-based optimization is becoming an important tool for process development within the biopharmaceutical industry. In this paper, a multi-objective optimization study of separation of three insulin variants by reversed-phase chromatography (RPC) is presented. The decision variables were the load factor, the concentrations of ethanol and KCl in the eluent, and the cut points for the product pooling. In addition to the purity constraints, a solubility constraint on the total insulin concentration was applied. The insulin solubility is a function of the ethanol concentration in the mobile phase, and the main aim was to investigate the effect of this constraint on the maximal productivity. Multi-objective optimization was performed with and without the solubility constraint, and visualized as Pareto fronts, showing the optimal combinations of the two objectives productivity and yield for each case. Comparison of the constrained and unconstrained Pareto fronts showed that the former diverges when the constraint becomes active, because the increase in productivity with decreasing yield is almost halted. Consequently, we suggest the operating point at which the total outlet concentration of insulin reaches the solubility limit as the most suitable one. According to the results from the constrained optimizations, the maximal productivity on the C4 adsorbent (0.41 kg/(m^3 column h)) is less than half of that on the C18 adsorbent (0.87 kg/(m^3 column h)). This is partly caused by the higher selectivity between the insulin variants on the C18 adsorbent, but the main reason is the difference in how the solubility constraint affects the processes. Since the optimal ethanol concentration for elution on the C18 adsorbent is higher than for the C4 one, the insulin solubility is also higher, allowing a higher pool concentration. An alternative method of finding the suggested operating point was also evaluated, and it was shown to give very satisfactory results for well-mapped Pareto fronts. Copyright © 2017 Elsevier B.V. All rights reserved.
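
    One standard way to trace such a Pareto front is an epsilon-constraint sweep: repeatedly maximize productivity subject to a progressively tightened yield floor. The sketch below does this with stand-in objective and yield functions; they are not the chromatography model of the paper.

      import numpy as np
      from scipy.optimize import minimize

      def neg_productivity(d):           # stand-in objective to minimize
          return -(d[0] * (1.0 - d[0]))

      def yield_of(d):                   # stand-in yield model
          return 1.0 - 0.5 * d[0]

      front = []
      for y_min in np.linspace(0.6, 0.95, 8):
          res = minimize(neg_productivity, [0.5], bounds=[(0.0, 1.0)],
                         constraints=[{"type": "ineq",
                                       "fun": lambda d, y=y_min: yield_of(d) - y}])
          front.append((yield_of(res.x), -res.fun))

      for y, prod in front:              # the trade-off curve
          print(round(y, 3), round(prod, 4))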

  6. The Fisher-Markov selector: fast selecting maximally separable feature subset for multiclass classification with applications to high-dimensional data.

    PubMed

    Cheng, Qiang; Zhou, Hongbo; Cheng, Jie

    2011-06-01

    Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward the efficient selection of the globally optimal subset of features, we introduce a new selector, which we call the Fisher-Markov selector, to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with the sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach for optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. In pattern recognition and from a model selection viewpoint, our procedure says that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function which in fact can be obtained with an explicit expression.

  7. Modelling Schumann resonances from ELF measurements using non-linear optimization methods

    NASA Astrophysics Data System (ADS)

    Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo

    2017-04-01

    Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements were obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the national park of Sierra Nevada. The first three modes, contained in the frequency band from 6 to 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of nonlinear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, also known as the optimization process, is the procedure followed to obtain Schumann resonances from the natural electromagnetic noise. The optimization methods analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The functions fitted to the data by the different methods are three Lorentzian curves plus a straight line; Gaussian curves have also been considered. The conclusions of this study are as follows: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges most slowly and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has been found.
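
    The fitting task can be reproduced in miniature: three Lorentzian peaks plus a straight line, fitted here with SciPy's Levenberg-Marquardt wrapper. The synthetic peak positions are placed near the first three Schumann modes; all values are invented stand-ins for the station data.

      import numpy as np
      from scipy.optimize import curve_fit

      def model(f, *p):
          out = p[0] + p[1] * f                    # straight-line background
          for A, f0, w in zip(p[2::3], p[3::3], p[4::3]):
              out = out + A / (1.0 + ((f - f0) / w) ** 2)   # Lorentzians
          return out

      f = np.linspace(6.0, 25.0, 400)
      true = [0.1, 0.0, 1.0, 7.8, 1.0, 0.7, 14.1, 1.5, 0.5, 20.3, 2.0]
      rng = np.random.default_rng(1)
      y = model(f, *true) + rng.normal(0.0, 0.02, f.size)

      p0 = [0.0, 0.0, 0.8, 8.0, 1.0, 0.5, 14.0, 1.0, 0.4, 20.0, 1.5]
      popt, _ = curve_fit(model, f, y, p0=p0, method="lm")
      print(popt[3::3])   # recovered central frequencies of the three modes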

  8. Meta-Analysis inside and outside Particle Physics: Convergence Using the Path of Least Resistance?

    ERIC Educational Resources Information Center

    Jackson, Dan; Baker, Rose

    2013-01-01

    In this note, we explain how the method proposed by Hartung and Knapp provides a compromise between conventional meta-analysis methodology and "unconstrained averaging", as used by the Particle Data Group.

  9. A wearable computing platform for developing cloud-based machine learning models for health monitoring applications.

    PubMed

    Patel, Shyamal; McGinnis, Ryan S; Silva, Ikaro; DiCristofaro, Steve; Mahadevan, Nikhil; Jortberg, Elise; Franco, Jaime; Martin, Albert; Lust, Joseph; Raj, Milan; McGrane, Bryan; DePetrillo, Paolo; Aranyosi, A J; Ceruolo, Melissa; Pindado, Jesus; Ghaffari, Roozbeh

    2016-08-01

    Wearable sensors have the potential to enable clinical-grade ambulatory health monitoring outside the clinic. Technological advances have enabled the development of devices that can measure vital signs with great precision, and significant progress has been made towards extracting clinically meaningful information from these devices in research studies. However, translating the measurement accuracies achieved in controlled settings such as the lab and clinic to unconstrained environments such as the home remains a challenge. In this paper, we present a novel wearable computing platform for unobtrusive collection of labeled datasets and a new paradigm for continuous development, deployment and evaluation of machine learning models, to ensure robust model performance as we transition from the lab to the home. Using this system, we train activity classification models across two studies and track changes in model performance as we go from constrained to unconstrained settings.

  10. Unconstrained tripolar implants for primary total hip arthroplasty in patients at risk for dislocation.

    PubMed

    Guyen, Olivier; Pibarot, Vincent; Vaz, Gualter; Chevillotte, Christophe; Carret, Jean-Paul; Bejui-Hugues, Jacques

    2007-09-01

    We performed a retrospective study on 167 primary total hip arthroplasty (THA) procedures in 163 patients at high risk for instability to assess the reliability of unconstrained tripolar implants (press-fit outer metal shell articulating a bipolar polyethylene component) in preventing dislocations. Eighty-four percent of the patients had at least 2 risk factors for dislocation. The mean follow-up length was 40.2 months. No dislocation was observed. Harris hip scores improved significantly. Six hips were revised, and no aseptic loosening of the cup was observed. The tripolar implant was extremely successful in achieving stability. However, because of the current lack of data documenting polyethylene wear at additional bearing, the routine use of tripolar implants in primary THA is discouraged and should be considered at the present time only for selected patients at high risk for dislocation and with limited activities.

  11. Direct brain recordings reveal hippocampal rhythm underpinnings of language processing.

    PubMed

    Piai, Vitória; Anderson, Kristopher L; Lin, Jack J; Dewar, Callum; Parvizi, Josef; Dronkers, Nina F; Knight, Robert T

    2016-10-04

    Language is classically thought to be supported by perisylvian cortical regions. Here we provide intracranial evidence linking the hippocampal complex to linguistic processing. We used direct recordings from the hippocampal structures to investigate whether theta oscillations, pivotal in memory function, track the amount of contextual linguistic information provided in sentences. Twelve participants heard sentences that were either constrained ("She locked the door with the") or unconstrained ("She walked in here with the") before presentation of the final word ("key"), shown as a picture that participants had to name. Hippocampal theta power increased for constrained relative to unconstrained contexts during sentence processing, preceding picture presentation. Our study implicates hippocampal theta oscillations in a language task using natural language associations that do not require memorization. These findings reveal that the hippocampal complex contributes to language in an active fashion, relating incoming words to stored semantic knowledge, a necessary process in the generation of sentence meaning.

  12. Beyond the group mind: a quantitative review of the interindividual-intergroup discontinuity effect.

    PubMed

    Wildschut, Tim; Pinter, Brad; Vevea, Jack L; Insko, Chester A; Schopler, John

    2003-09-01

    This quantitative review of 130 comparisons of interindividual and intergroup interactions in the context of mixed-motive situations reveals that intergroup interactions are generally more competitive than interindividual interactions. The authors identify 4 moderators of this interindividual-intergroup discontinuity effect, each based on the theoretical perspective that the discontinuity effect flows from greater fear and greed in intergroup relative to interindividual interactions. Results reveal that each moderator shares a unique association with the magnitude of the discontinuity effect. The discontinuity effect is larger when (a) participants interact with an opponent whose behavior is unconstrained by the experimenter or constrained by the experimenter to be cooperative rather than constrained by the experimenter to be reciprocal, (b) group members make a group decision rather than individual decisions, (c) unconstrained communication between participants is present rather than absent, and (d) conflict of interest is severe rather than mild.

  13. Single Crystals Grown Under Unconstrained Conditions

    NASA Astrophysics Data System (ADS)

    Sunagawa, Ichiro

    Based on detailed investigations of morphology (evolution and variation in external forms), surface microtopography of crystal faces (spirals and etch figures), internal morphology (growth sectors, growth banding and associated impurity partitioning) and perfection (dislocations and other lattice defects) in single crystals, we can deduce how and by what mechanism a crystal grew, and what fluctuations in growth parameters it experienced through its growth and post-growth history under unconstrained conditions. This information is useful not only in finding appropriate ways to grow highly perfect and homogeneous single crystals, but also in deciphering the letters sent from the depths of the Earth and space. It is also useful in discriminating synthetic from natural gemstones. In this chapter, the available methods for obtaining such information are briefly summarized, and actual examples demonstrating the importance of this type of investigation are selected from both natural minerals (diamond, quartz, hematite, corundum, beryl, phlogopite) and synthetic crystals (SiC, diamond, corundum, beryl).

  14. Multiobjective optimization of temporal processes.

    PubMed

    Song, Zhe; Kusiak, Andrew

    2010-06-01

    This paper presents a dynamic predictive-optimization framework of a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with the data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be either transformed into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework.

  15. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the Social Emotional Optimization Algorithm (SEOA) is used to solve such problems; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  16. Computational approaches to motor learning by imitation.

    PubMed Central

    Schaal, Stefan; Ijspeert, Auke; Billard, Aude

    2003-01-01

    Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking; indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions. PMID:12689379

  17. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  18. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  19. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  20. Optimality conditions for the numerical solution of optimization problems with PDE constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro; Ridzal, Denis

    2014-03-01

    A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to applications.
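
    In reduced-space form, the optimality conditions such a framework organizes pair a state solve with an adjoint solve, and the adjoint yields the reduced gradient used by the optimizer. Below is a minimal discrete sketch for a one-parameter source identification toy problem with a 1D Laplacian; everything in it is an illustrative assumption, not the report's formulation.

      import numpy as np

      n = 50
      A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
           - np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2   # 1D Laplacian
      b = np.ones(n)
      u_obs = np.linalg.solve(A, 3.0 * b)                  # data, m_true = 3

      def J_and_grad(m):
          u = np.linalg.solve(A, m * b)                    # state solve
          lam = np.linalg.solve(A.T, u - u_obs)            # adjoint solve
          return 0.5 * np.dot(u - u_obs, u - u_obs), np.dot(lam, b)

      # finite-difference check of the reduced gradient at m = 1
      J0, g = J_and_grad(1.0)
      J1, _ = J_and_grad(1.0 + 1e-6)
      print(g, (J1 - J0) / 1e-6)   # the two values agree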

  1. FRANOPP: Framework for analysis and optimization problems user's guide

    NASA Technical Reports Server (NTRS)

    Riley, K. M.

    1981-01-01

    Framework for analysis and optimization problems (FRANOPP) is a software aid for the study and solution of design (optimization) problems which provides the driving program and plotting capability for a user-generated programming system. In addition to FRANOPP, the programming system also contains the optimization code CONMIN and two user-supplied codes, one for analysis and one for output. With FRANOPP the user is provided with five options for studying a design problem. Three of the options utilize the plot capability and present an in-depth study of the design problem. The study can be focused on a history of the optimization process or on the interaction of variables within the design problem.

  2. Effect of the mandible on mouthguard measurements of head kinematics.

    PubMed

    Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B

    2016-06-14

    Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances and future validation testing should include unconstrained mandibles to ensure proper accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
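
    The spring-mass disturbance picture in this abstract lends itself to a quick numerical illustration. The following is a minimal sketch, not the authors' model: the mandible is a point mass riding on the compliant mouthguard layer (spring plus damper), the skull undergoes a short impact pulse, and the sensor reading is corrupted by the relative motion. All parameter values and the pulse shape are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters (not from the paper): mandible mass coupled to
    # the sensor region through the compliant mouthguard material, modeled
    # as a spring and damper.
    m, k, c = 0.1, 2.0e4, 5.0                    # kg, N/m, N*s/m

    def a_skull(t):
        """True skull acceleration: a 10 ms half-sine impact pulse (m/s^2)."""
        return 980.0 * np.sin(np.pi * t / 0.01) * (t < 0.01)

    def rhs(t, y):
        x, v = y                                 # mandible motion relative to skull
        return [v, (-k * x - c * v) / m - a_skull(t)]

    sol = solve_ivp(rhs, (0.0, 0.05), [0.0, 0.0], max_step=1e-4, dense_output=True)
    t = np.linspace(0.0, 0.05, 500)
    x, v = sol.sol(t)
    a_rel = (-k * x - c * v) / m - a_skull(t)
    a_meas = a_skull(t) + a_rel                  # sensor reading with disturbance
    err = np.sqrt(np.mean((a_meas - a_skull(t))**2)) / np.ptp(a_skull(t))
    print(f"normalized RMS error: {100 * err:.1f}%")
    ```

    Stiffening the coupling or isolating the sensors from the mandible load path (larger k relative to m, in this toy model) shrinks the disturbance term, consistent with the design direction the abstract describes.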

  3. Oxygen Desaturation Index Estimation through Unconstrained Cardiac Sympathetic Activity Assessment Using Three Ballistocardiographic Systems.

    PubMed

    Jung, Da Woon; Hwang, Su Hwan; Lee, Yu Jin; Jeong, Do-Un; Park, Kwang Suk

    2016-01-01

    Nocturnal hypoxemia, characterized by abnormally low oxygen saturation levels in arterial blood during sleep, is a significant feature of various pathological conditions. The oxygen desaturation index, commonly used to evaluate the nocturnal hypoxemia severity, is acquired using nocturnal pulse oximetry that requires the overnight wear of a pulse oximeter probe. This study aimed to suggest a method for the unconstrained estimation of the oxygen desaturation index. We hypothesized that the severity of nocturnal hypoxemia would be positively associated with cardiac sympathetic activation during sleep. Unconstrained heart rate variability monitoring was conducted using three different ballistocardiographic systems to assess cardiac sympathetic activity. Overnight polysomnographic and ballistocardiographic recording pairs were collected from the 20 non-nocturnal hypoxemia (oxygen desaturation index <5 events/h) subjects and the 76 nocturnal hypoxemia patients. Among the 96 recording pairs, 48 were used as training data and the remaining 48 as test data. The regression analysis, performed using the low-frequency component of heart rate variability, exhibited a root mean square error of 3.33 events/h between the estimates and the reference values of the oxygen desaturation index. The nocturnal hypoxemia diagnostic performance produced by our method was presented with an average accuracy of 96.5% at oxygen desaturation index cutoffs of ≥5, 15, and 30 events/h. Our method has the potential to serve as a complementary measure against the accidental slip-out of a pulse oximeter probe during nocturnal pulse oximetry. The independent application of our method could facilitate home-based long-term oxygen desaturation index monitoring. © 2016 S. Karger AG, Basel.

  4. An apparent contradiction: increasing variability to achieve greater precision?

    PubMed

    Rosenblatt, Noah J; Hurt, Christopher P; Latash, Mark L; Grabiner, Mark D

    2014-02-01

    To understand the relationship between variability of foot placement in the frontal plane and stability of gait patterns, we explored how constraining mediolateral foot placement during walking affects the structure of kinematic variance in the lower-limb configuration space during the swing phase of gait. Ten young subjects walked under three conditions: (1) unconstrained (normal walking), (2) constrained (walking overground with visual guides for foot placement to achieve the measured unconstrained step width) and, (3) beam (walking on elevated beams spaced to achieve the measured unconstrained step width). The uncontrolled manifold analysis of the joint configuration variance was used to quantify two variance components, one that did not affect the mediolateral trajectory of the foot in the frontal plane ("good variance") and one that affected this trajectory ("bad variance"). Based on recent studies, we hypothesized that across conditions (1) the index of the synergy stabilizing the mediolateral trajectory of the foot (the normalized difference between the "good variance" and "bad variance") would systematically increase and (2) the changes in the synergy index would be associated with a disproportionate increase in the "good variance." Both hypotheses were confirmed. We conclude that an increase in the "good variance" component of the joint configuration variance may be an effective method of ensuring high stability of gait patterns during conditions requiring increased control of foot placement, particularly if a postural threat is present. Ultimately, designing interventions that encourage a larger amount of "good variance" may be a promising method of improving stability of gait patterns in populations such as older adults and neurological patients.
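
    The uncontrolled manifold partition described here can be made concrete with a short sketch. Assuming joint configurations collected at a matched gait event and a task Jacobian for the mediolateral foot position (both illustrative below), the "good" variance is the per-DOF variance within the Jacobian's nullspace, the "bad" variance lies in its orthogonal complement, and the synergy index is their normalized difference:

    ```python
    import numpy as np

    def ucm_synergy_index(joint_angles, J):
        """Uncontrolled-manifold partition of joint-configuration variance.

        joint_angles : (n_trials, n_joints) configurations at a matched event
        J            : (n_task, n_joints) Jacobian of the task variable (here
                       the mediolateral foot position) w.r.t. the joint angles
        Returns per-DOF 'good'/'bad' variance and the synergy index.
        """
        dev = joint_angles - joint_angles.mean(axis=0)   # trial-to-trial deviations
        n_task, n_joints = J.shape
        # Orthonormal basis of the nullspace of J (the uncontrolled manifold).
        _, s, vt = np.linalg.svd(J)
        rank = int(np.sum(s > 1e-10))
        null_basis = vt[rank:].T                         # (n_joints, n_joints - rank)
        dev_ucm = dev @ null_basis                       # does not move the foot
        dev_ort = dev - dev_ucm @ null_basis.T           # moves the foot
        v_good = np.sum(dev_ucm**2) / (len(dev) * (n_joints - rank))
        v_bad = np.sum(dev_ort**2) / (len(dev) * rank)
        v_tot = np.sum(dev**2) / (len(dev) * n_joints)
        return v_good, v_bad, (v_good - v_bad) / v_tot

    # Toy usage with random data (illustrative only):
    rng = np.random.default_rng(0)
    angles = rng.normal(size=(50, 7))                    # 7-DOF leg model
    J = rng.normal(size=(1, 7))                          # 1-D task: ML foot position
    print(ucm_synergy_index(angles, J))
    ```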

  5. Mobile EEG on the bike: disentangling attentional and physical contributions to auditory attention tasks

    NASA Astrophysics Data System (ADS)

    Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten

    2016-08-01

    Objective. In the past few years there has been a growing interest in studying brain functioning in natural, real-life situations. Mobile EEG allows the brain to be studied in real unconstrained environments, but it faces the intrinsic challenge that it is impossible to disentangle observed changes in brain activity due to increased cognitive demands of the complex natural environment from those due to physical involvement. In this work we aim to disentangle the influence of cognitive demands and distractions that arise from such outdoor unconstrained recordings. Approach. We evaluate the ERP and single-trial characteristics of a three-class auditory oddball paradigm recorded in outdoor scenarios while pedaling on a fixed bike or biking freely around. In addition we also carefully evaluate the trial-specific motion artifacts through independent gyroscope measurements and control for muscle artifacts. Main results. A decrease in P300 amplitude was observed in the free biking condition as compared to the fixed bike conditions. Above-chance P300 single-trial classification in highly dynamic real-life environments while biking outdoors was achieved. Certain significant artifact patterns were identified in the free biking condition, but neither these nor the increase in movement (as derived from continuous gyroscope measurements) can fully explain the differences in classification accuracy and P300 waveform. The increased cognitive load in real-life scenarios is shown to play a major role in the observed differences. Significance. Our findings suggest that auditory oddball results measured in natural real-life scenarios are influenced mainly by increased cognitive load due to being in an unconstrained environment.

  6. Mobile EEG on the bike: disentangling attentional and physical contributions to auditory attention tasks.

    PubMed

    Zink, Rob; Hunyadi, Borbála; Huffel, Sabine Van; Vos, Maarten De

    2016-08-01

    In the past few years there has been a growing interest in studying brain functioning in natural, real-life situations. Mobile EEG allows the brain to be studied in real unconstrained environments, but it faces the intrinsic challenge that it is impossible to disentangle observed changes in brain activity due to increased cognitive demands of the complex natural environment from those due to physical involvement. In this work we aim to disentangle the influence of cognitive demands and distractions that arise from such outdoor unconstrained recordings. We evaluate the ERP and single-trial characteristics of a three-class auditory oddball paradigm recorded in outdoor scenarios while pedaling on a fixed bike or biking freely around. In addition we also carefully evaluate the trial-specific motion artifacts through independent gyroscope measurements and control for muscle artifacts. A decrease in P300 amplitude was observed in the free biking condition as compared to the fixed bike conditions. Above-chance P300 single-trial classification in highly dynamic real-life environments while biking outdoors was achieved. Certain significant artifact patterns were identified in the free biking condition, but neither these nor the increase in movement (as derived from continuous gyroscope measurements) can fully explain the differences in classification accuracy and P300 waveform. The increased cognitive load in real-life scenarios is shown to play a major role in the observed differences. Our findings suggest that auditory oddball results measured in natural real-life scenarios are influenced mainly by increased cognitive load due to being in an unconstrained environment.

  7. Mind your step: metabolic energy cost while walking an enforced gait pattern.

    PubMed

    Wezenberg, D; de Haan, A; van Bennekom, C A M; Houdijk, H

    2011-04-01

    The energy cost of walking can be attributed to energy related to the walking movement and energy related to balance control. In order to differentiate between the two components, we investigated the energy cost of walking an enforced step pattern, thereby perturbing balance while the walking movement is preserved. Nine healthy subjects walked three times at comfortable walking speed on an instrumented treadmill. The first trial consisted of unconstrained walking. In the next two trials, subjects walked while following a step pattern projected on the treadmill. The projected steps were either composed of the averaged step characteristics (periodic trial), or were an exact copy, including the variability, of the steps taken while walking unconstrained (variable trial). Metabolic energy cost was assessed, and center of pressure profiles were analyzed to determine task performance and to gain insight into the balance control strategies applied. Results showed that the metabolic energy cost was significantly higher in both the periodic and variable trials (8% and 13%, respectively) compared to unconstrained walking. The variation in center of pressure trajectories during single limb support was higher when a gait pattern was enforced, indicating a more active ankle strategy. The increased metabolic energy cost could originate from increased preparatory muscle activation to ensure proper foot placement and from a more active ankle strategy to control lateral balance. These results entail that the metabolic energy cost of walking can be influenced significantly by control strategies that do not necessarily alter global gait characteristics. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto-optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
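
    The binning selection and gene-space transformation are specific to this paper, but the Pareto bookkeeping any such multi-objective GA relies on is generic. A minimal sketch of the dominance test and non-dominated filtering (minimization convention assumed):

    ```python
    import numpy as np

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization)."""
        return np.all(a <= b) and np.any(a < b)

    def pareto_front(points):
        """Return the indices of the non-dominated points in a population."""
        idx = []
        for i, p in enumerate(points):
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
                idx.append(i)
        return idx

    # Toy usage: two objectives on a small random population.
    pop = np.random.rand(20, 2)
    print(pareto_front(pop))
    ```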

  9. Load balancing and closed chain multiple arm control

    NASA Technical Reports Server (NTRS)

    Kreutz, Kenneth; Lokshin, Anatole

    1988-01-01

    The authors give the general dynamical equations for several rigid link manipulators rigidly grasping a commonly held rigid object. It is shown that the number of arm-configuration degrees of freedom lost due to imposing the closed-loop kinematic constraints is the same as the number of degrees of freedom gained for controlling the internal forces of the closed-chain system. This number is equal to the dimension of the kernel of the Jacobian operator which transforms contact forces to the net forces acting on the held object, and it is shown that this kernel can be identified with the subspace of controllable internal forces of the closed-chain system. Control of these forces makes it possible to regulate the grasping forces imparted to the held object or to control the load taken by each arm. It is shown that the internal forces can be influenced without affecting the control of the configuration degrees of freedom. Control laws of the feedback linearization type are shown to be useful for controlling the location and attitude of a frame fixed with respect to the held object, while simultaneously controlling the internal forces of the closed-chain system. Force feedback can be used to linearize and control the system even when the held object has unknown mass properties. If saturation effects are ignored, an unconstrained quadratic optimization can be performed to distribute the load optimally among the joint actuators.
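
    The kernel argument in this abstract is easy to reproduce numerically. Below is a hedged sketch with an illustrative planar two-arm grasp map (not the paper's formulation): the pseudoinverse gives the minimum-norm, unconstrained quadratic-optimal load distribution, and adding any kernel vector changes the internal squeeze without changing the net force on the object.

    ```python
    import numpy as np
    from scipy.linalg import null_space

    # Illustrative planar example: two arms rigidly grasping one object.
    # The grasp map W sends stacked contact forces f = [f1; f2] to the net
    # force on the object; here simply the sum of the two contact forces.
    W = np.hstack([np.eye(2), np.eye(2)])        # shape (2, 4)
    F_net = np.array([10.0, 0.0])                # desired net force on object

    # Minimum-norm solution of W f = F_net: the unconstrained quadratic
    # optimum of min ||f||^2, i.e. even load sharing between the two arms.
    f_min = np.linalg.pinv(W) @ F_net

    # Internal forces live in ker(W): adding them changes the squeeze on
    # the object and the per-arm load, but not the net force.
    N = null_space(W)                            # shape (4, 2)
    f_squeeze = f_min + N @ np.array([5.0, 0.0])

    print(W @ f_min, W @ f_squeeze)              # both equal F_net
    ```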

  10. A hybrid optimization algorithm to explore atomic configurations of TiO 2 nanoparticles

    DOE PAGES

    Inclan, Eric J.; Geohegan, David B.; Yoon, Mina

    2017-10-17

    Here we present a hybrid algorithm comprised of differential evolution coupled with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton optimization algorithm, for the purpose of identifying a broad range of (meta)stable TinO2n nanoparticles, as an example system, described by the Buckingham interatomic potential. The potential and its gradient are modified to be piecewise continuous to enable use of these continuous-domain, unconstrained algorithms, thereby improving compatibility. To measure computational effectiveness, a regression on known structures is used. This approach defines effectiveness as the ability of an algorithm to produce a set of structures whose energy distribution follows the regression as the number of TinO2n units increases, such that the shape of the distribution is consistent with the algorithm's stated goals. Our calculation demonstrates that the hybrid algorithm finds global minimum configurations more effectively than the differential evolution algorithms widely employed in the field of materials science. Specifically, the hybrid algorithm is shown to reproduce the global minimum energy structures reported in the literature up to n = 5, and retains good agreement with the regression up to n = 25. For 25 < n < 100, where literature structures are unavailable, the hybrid effectively obtains structures at lower energies per TiO2 unit as the system size increases.
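
    The two-stage hybrid pattern described here, global differential evolution followed by quasi-Newton refinement, can be sketched with standard SciPy tools. The Rastrigin function below is only a stand-in for the paper's Buckingham potential; everything else is illustrative:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def energy(x):
        """Stand-in smooth multimodal 'potential' (Rastrigin); the paper's
        piecewise-continuous Buckingham potential would replace this."""
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    bounds = [(-5.12, 5.12)] * 6

    # Stage 1: differential evolution explores the search space globally.
    de = differential_evolution(energy, bounds, seed=1, maxiter=200, polish=False)

    # Stage 2: BFGS quasi-Newton refinement polishes the best candidate
    # (this is where continuity of the potential and its gradient matters).
    bfgs = minimize(energy, de.x, method="BFGS")

    print(de.fun, bfgs.fun)                      # bfgs.fun <= de.fun
    ```

    The same structure applies with the actual interatomic potential substituted for `energy`, provided it is made piecewise continuous as the authors describe.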

  11. Prediction of human gait trajectories during the SSP using a neuromusculoskeletal modeling: A challenge for parametric optimization.

    PubMed

    Seyed, Mohammadali Rahmati; Mostafa, Rostami; Borhan, Beigzadeh

    2018-04-27

    Parametric optimization techniques have been widely employed to predict human gait trajectories; however, their application to revealing other aspects of gait is questionable. The aim of this study is to investigate whether or not the gait prediction model is able to justify the movement trajectories for higher average velocities. A planar, seven-segment model with sixteen muscle groups was used to represent human neuro-musculoskeletal dynamics. At first, the joint angles, ground reaction forces (GRFs) and muscle activations were predicted and validated for normal average velocity (1.55 m/s) in the single support phase (SSP) by minimizing energy expenditure subject to the nonlinear constraints of gait. The unconstrained system dynamics of extended inverse dynamics (USDEID) approach was used to estimate muscle activations. Then, by scaling time and applying the same procedure, the movement trajectories were predicted for higher average velocities (from 2.07 m/s to 4.07 m/s) and compared to the pattern of movement with fast walking speed. The comparison indicated a high level of compatibility between the experimental and predicted results, except for the vertical position of the center of gravity (COG). It was concluded that the gait prediction model can be effectively used to predict gait trajectories for higher average velocities.

  12. Dynamic Modeling, Model-Based Control, and Optimization of Solid Oxide Fuel Cells

    NASA Astrophysics Data System (ADS)

    Spivey, Benjamin James

    2011-07-01

    Solid oxide fuel cells are a promising option for distributed stationary power generation that offers efficiencies ranging from 50% in stand-alone applications to greater than 80% in cogeneration. To advance SOFC technology for widespread market penetration, the SOFC should demonstrate improved cell lifetime and load-following capability. This work seeks to improve lifetime through dynamic analysis of critical lifetime variables and advanced control algorithms that permit load-following while remaining in a safe operating zone based on stress analysis. Control algorithms typically have addressed SOFC lifetime operability objectives using unconstrained, single-input-single-output control algorithms that minimize thermal transients. Existing SOFC controls research has not considered maximum radial thermal gradients or limits on absolute temperatures in the SOFC. In particular, as stress analysis demonstrates, the minimum cell temperature is the primary thermal stress driver in tubular SOFCs. This dissertation presents a dynamic, quasi-two-dimensional model for a high-temperature tubular SOFC combined with ejector and prereformer models. The model captures dynamics of critical thermal stress drivers and is used as the physical plant for closed-loop control simulations. A constrained, MIMO model predictive control algorithm is developed and applied to control the SOFC. Closed-loop control simulation results demonstrate effective load-following, constraint satisfaction for critical lifetime variables, and disturbance rejection. Nonlinear programming is applied to find the optimal SOFC size and steady-state operating conditions to minimize total system costs.

  13. Wireless Sensor Network Optimization: Multi-Objective Paradigm.

    PubMed

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-07-20

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Depending on the nature of the application, the sensing scenario and the input/output of the problem, the type of optimization problem changes. To address the different optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints considered when formulating optimization problems in wireless sensor networks. Given the multi-faceted coverage of this article, it should open up new avenues of research in the area of multi-objective optimization for wireless sensor networks.
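
    The generic multi-objective problem the authors describe can be summarized schematically (the notation below is assumed, not taken from the paper):

    ```latex
    \min_{x \in \mathcal{X}} \; F(x) = \bigl(f_1(x), \ldots, f_k(x)\bigr)
    \quad \text{s.t.} \quad g_i(x) \le 0,\; i = 1,\ldots,m, \qquad
    h_j(x) = 0,\; j = 1,\ldots,p,
    ```

    where x collects the design variables (e.g., node placements, transmit powers, routes), the f's are the competing objectives (coverage, lifetime, energy, cost), and a feasible x is Pareto-optimal if no other feasible point improves one objective without worsening another.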

  14. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as the "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or none of the systematic observation residuals that are a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events - such as the Great Sumatra-Andaman Earthquake of 2004 - are visible in the global solutions without the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, such as the Indus and the Nile, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.
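
    The abstract does not spell out the estimator, but regularized gravity-field solutions of this kind are typically Tikhonov-type least-squares problems; schematically (notation assumed):

    ```latex
    \hat{x}_{\lambda} = \arg\min_{x} \; \|Ax - b\|_2^2 + \lambda^2 \|Lx\|_2^2 ,
    ```

    with x the spherical-harmonic coefficient corrections, b the GRACE observations, L a constraint operator, and λ selected by an L-curve-type criterion; the "L-ribbon" technique cheaply brackets the L-curve so its corner can be located without computing many full solutions.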

  15. Incoherent Fermi-Pasta-Ulam Recurrences and Unconstrained Thermalization Mediated by Strong Phase Correlations

    NASA Astrophysics Data System (ADS)

    Guasoni, M.; Garnier, J.; Rumpf, B.; Sugny, D.; Fatome, J.; Amrani, F.; Millot, G.; Picozzi, A.

    2017-01-01

    The long-standing and controversial Fermi-Pasta-Ulam problem addresses fundamental issues of statistical physics, and the attempt to resolve the mystery of the recurrences has led to many great discoveries, such as chaos, integrable systems, and soliton theory. From a general perspective, the recurrence is commonly considered as a coherent phase-sensitive effect that originates in the property of integrability of the system. In contrast to this interpretation, we show that convection among a pair of waves is responsible for a new recurrence phenomenon that takes place for strongly incoherent waves far from integrability. We explain the incoherent recurrence by developing a nonequilibrium spatiotemporal kinetic formulation that accounts for the existence of phase correlations among incoherent waves. The theory reveals that the recurrence originates in a novel form of modulational instability, which shows that strongly correlated fluctuations are spontaneously created among the random waves. Contrary to conventional incoherent modulational instabilities, we find that Landau damping can be completely suppressed, which unexpectedly removes the threshold of the instability. Consequently, the recurrence can take place for strongly incoherent waves and is thus characterized by a reduction of nonequilibrium entropy that violates the H theorem of entropy growth. In its long-term evolution, the system enters a secondary turbulent regime characterized by an irreversible process of relaxation to equilibrium. At variance with the expected thermalization described by standard Gibbsian statistical mechanics, our thermalization process is not dictated by the usual constraints of energy and momentum conservation: The inverse temperatures associated with energy and momentum are zero. This unveils a previously unrecognized scenario of unconstrained thermalization, which is relevant to a variety of weakly dispersive wave systems. Our work should stimulate the development of new experiments aimed at observing recurrence behaviors with random waves. From a broader perspective, the spatiotemporal kinetic formulation we develop here paves the way to the study of novel forms of global incoherent collective behaviors in wave turbulence, such as the formation of incoherent breather structures.

  16. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  17. A Kind of Nonlinear Programming Problem Based on Mixed Fuzzy Relation Equations Constraints

    NASA Astrophysics Data System (ADS)

    Li, Jinquan; Feng, Shuang; Mi, Honghai

    In this work, a kind of nonlinear programming problem with a non-differentiable objective function, under constraints expressed by a system of mixed fuzzy relation equations, is investigated. First, some properties of this kind of optimization problem are obtained. Then, a polynomial-time algorithm for this kind of optimization problem is proposed based on these properties. Furthermore, we show that this algorithm is optimal for the optimization problem considered in this paper. Finally, numerical examples are provided to illustrate the algorithm.

  18. In Praise of Ignorance

    ERIC Educational Resources Information Center

    Formica, Piero

    2014-01-01

    In this article Piero Formica examines the difference between incremental and revolutionary innovation, distinguishing between the constrained "path finders" and the unconstrained "path creators". He argues that an acceptance of "ignorance" and a willingness to venture into the unknown are critical elements in…

  19. A proportional control scheme for high density force myography.

    PubMed

    Belyea, Alexander T; Englehart, Kevin B; Scheme, Erik J

    2018-08-01

    Force myography (FMG) has been shown to be a potentially higher-accuracy alternative to electromyography for pattern recognition based prosthetic control. Classification accuracy, however, is just one factor that affects the usability of a control system. Others, like the ability to start and stop, to coordinate dynamic movements, and to control the velocity of the device through some proportional control scheme, can be of equal importance. To impart effective fine control using FMG-based pattern recognition, it is important that a method of controlling the velocity of each motion be developed. In this work, force myography data were collected from 14 able-bodied participants and one amputee participant as they performed a set of wrist and hand motions. The offline proportional control performance of a standard mean signal amplitude approach and a proposed regression-based alternative was compared. The impact of providing feedback during training, as well as the use of constrained or unconstrained hand and wrist contractions, was also evaluated. It is shown that the mean of rectified channel amplitudes commonly employed with electromyography does not translate to force myography. The proposed class-based regression proportional control approach is shown to significantly outperform this standard approach (p < 0.001), yielding R² correlation coefficients of 0.837 and 0.830 for constrained and unconstrained forearm contractions, respectively, for able-bodied participants. No significant difference (p = 0.693) was found in R² performance when feedback was provided during training or not. The amputee subject achieved a classification accuracy of 83.4% ± 3.47%, demonstrating the ability to distinguish contractions well with FMG. In proportional control, the amputee participant achieved an R² of 0.375 for regression-based proportional control during unconstrained contractions. This is lower than the unconstrained case for able-bodied subjects, possibly due to difficulty in visualizing contraction level modulation without feedback. This may be remedied by a prosthetic limb that provides real-time feedback in the form of device speed. A novel class-specific regression-based approach for multi-class proportional control is described and shown to provide an effective means of FMG-based proportional control.
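
    A hedged sketch of the class-specific regression idea (not the authors' implementation): a classifier selects the motion class, and a per-class regressor trained only on that class's frames maps the FMG pattern to a proportional velocity command. Data shapes, models, and names below are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Illustrative shapes only: X holds FMG frames (n_samples, n_channels),
    # y_class the motion labels, y_level the contraction intensity in [0, 1].
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 32))
    y_class = rng.integers(0, 3, size=600)
    y_level = rng.uniform(size=600)

    clf = LinearDiscriminantAnalysis().fit(X, y_class)

    # One regressor per class: the class-specific mapping from FMG amplitude
    # pattern to proportional-control output.
    regs = {c: LinearRegression().fit(X[y_class == c], y_level[y_class == c])
            for c in np.unique(y_class)}

    def proportional_command(frame):
        c = clf.predict(frame[None, :])[0]           # which motion
        speed = regs[c].predict(frame[None, :])[0]   # how fast
        return c, float(np.clip(speed, 0.0, 1.0))

    print(proportional_command(X[0]))
    ```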

  20. Fast Optimization for Aircraft Descent and Approach Trajectory

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John

    2017-01-01

    We address the problem of on-line scheduling of the aircraft descent and approach trajectory. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data, and (ii) efficient local search for the resulting reduced-dimensionality nonlinear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the General Pseudospectral Optimal Control Software toolbox. We present results of the solution of the aircraft descent scheduling problem using the novel fast algorithm and discuss its future applications.

  1. Research on cutting path optimization of sheet metal parts based on ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Ling, H.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    In view of the disadvantages of current cutting path optimization methods for sheet metal parts, a new method based on the ant colony algorithm is proposed in this paper. The cutting path optimization problem for sheet metal parts was taken as the research object, and the essence and optimization goal of the problem were presented. The traditional serial cutting constraint rule was improved, and a cutting constraint rule allowing cross cutting was proposed. The contour lines of the parts were discretized and a mathematical model of cutting path optimization was established, converting the problem into a selection problem over the contour lines of the parts. The ant colony algorithm was used to solve the problem, and its principle and steps were analyzed.
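
    A minimal ant colony sketch for the path-ordering core of such a method, with contour entry points reduced to a TSP-like visiting-order problem; the paper's cross-cutting constraint rule is not reproduced, and all parameters are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 100, size=(12, 2))      # contour entry points (toy data)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(12)

    tau = np.ones_like(d)                        # pheromone levels
    alpha, beta, rho, n_ants, n_iter = 1.0, 2.0, 0.5, 20, 100
    best_len, best_tour = np.inf, None

    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, 12))
            while unvisited:                     # build one open cutting path
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                w = tau[i, cand] ** alpha * (1.0 / d[i, cand]) ** beta
                nxt = int(rng.choice(cand, p=w / w.sum()))
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(d[a, b] for a, b in zip(tour, tour[1:]))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        tau *= 1.0 - rho                         # pheromone evaporation
        for length, tour in tours:               # pheromone deposit
            for a, b in zip(tour, tour[1:]):
                tau[a, b] += 1.0 / length

    print(best_len, best_tour)
    ```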

  2. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  3. Application of the gravity search algorithm to multi-reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.

    2016-12-01

    Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions and single-reservoir and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) on three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem, for which the global solution equals 1.213. On the four-reservoir problem, the GSA converged on average to 99.97% of the global solution, while the GA converged to 97%. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in a smaller number of function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
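
    The core GSA update is compact enough to sketch: fitness is mapped to mass, agents pull on one another with a decaying gravity constant, and random inertia keeps exploration alive. This is a generic toy version on a sphere benchmark, not the paper's reservoir formulation:

    ```python
    import numpy as np

    def sphere(x):                               # toy benchmark to minimize
        return np.sum(x**2, axis=1)

    rng = np.random.default_rng(1)
    n, dim, iters, g0 = 30, 5, 200, 100.0
    x = rng.uniform(-10, 10, (n, dim))
    v = np.zeros((n, dim))

    for t in range(iters):
        f = sphere(x)
        # Fitness-to-mass mapping: the best agent gets the largest mass.
        m = (f.max() - f) / (f.max() - f.min() + 1e-12)
        m /= m.sum()
        g = g0 * np.exp(-20.0 * t / iters)       # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n):
            diff = x - x[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            # Randomly weighted pull of every mass on agent i.
            acc[i] = np.sum(rng.random((n, 1)) * g * m[:, None] * diff
                            / dist[:, None], axis=0)
        v = rng.random((n, dim)) * v + acc       # inertia plus gravitational kick
        x = x + v

    print(sphere(x).min())                       # near zero if the search worked
    ```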

  4. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models continue to evolve to represent real production systems accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems. Since flow shop scheduling is a discrete optimization problem, we need to modify PSO to fit the problem. The modification is done using a probability transition matrix mechanism. To handle the multi-objective problem, we use a Pareto-optimal variant (MPSO). The results of MPSO are better than those of the plain PSO because the MPSO solution set yields a higher probability of finding the optimal solution. Moreover, the MPSO solution set is closer to the optimal solution.

  5. Constraint Optimization Literature Review

    DTIC Science & Technology

    2015-11-01

    Subject terms: high-performance computing, mobile ad hoc network, optimization, constraint, satisfaction. The report surveys constraint satisfaction problems and constraint optimization problems, and reviews constraint optimization algorithms, including brute-force search, constraint propagation, depth-first search, and local search.

  6. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
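
    For reference, the three criteria compared can be stated compactly. With F(τ) the Fisher Information Matrix for sampling distribution τ and θ the nominal parameters, and up to noise-variance scaling:

    ```latex
    \tau_{D} = \arg\max_{\tau} \det F(\tau), \qquad
    \tau_{E} = \arg\max_{\tau} \lambda_{\min}\!\bigl(F(\tau)\bigr), \qquad
    \tau_{SE} = \arg\min_{\tau} \sum_{i=1}^{p}
      \frac{\bigl[F(\tau)^{-1}\bigr]_{ii}}{\theta_i^{2}} .
    ```

    The SE-optimal criterion thus directly minimizes the sum of squared normalized asymptotic standard errors, which is what the standard-error comparisons in the paper evaluate.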

  7. Multiobjective optimization approach: thermal food processing.

    PubMed

    Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R

    2009-01-01

    The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field.

  8. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  9. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
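
    Schematically, the primal-dual pair discussed in these two records is the familiar mean-variance duality (deterministic notation assumed; the papers treat the disordered-system version by replica analysis):

    ```latex
    \text{(P)}\;\; \min_{w}\; \tfrac{1}{2}\, w^{T} C\, w
    \;\;\text{s.t.}\;\; w^{T}\mathbf{1} = N,\; w^{T}\mu \ge R;
    \qquad
    \text{(D)}\;\; \max_{w}\; w^{T}\mu
    \;\;\text{s.t.}\;\; w^{T}\mathbf{1} = N,\; \tfrac{1}{2}\, w^{T} C\, w \le \varepsilon ,
    ```

    where w is the portfolio over N assets, μ the expected returns, and C the return covariance: the primal minimizes risk under budget and return constraints, while the dual maximizes return under budget and risk constraints.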

  10. The coral reefs optimization algorithm: a novel metaheuristic for efficiently solving optimization problems.

    PubMed

    Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open up a line of research for further application of the algorithm to real-world problems.

  11. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    PubMed Central

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open up a line of research for further application of the algorithm to real-world problems. PMID:25147860

  12. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.

  13. Wireless Sensor Network Optimization: Multi-Objective Paradigm

    PubMed Central

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-01-01

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Depending on the nature of the application, the sensing scenario and the input/output of the problem, the type of optimization problem changes. To address the different optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints considered when formulating optimization problems in wireless sensor networks. Given the multi-faceted coverage of this article, it should open up new avenues of research in the area of multi-objective optimization for wireless sensor networks. PMID:26205271

  14. Cost component analysis.

    PubMed

    Lörincz, András; Póczos, Barnabás

    2003-06-01

    In optimization, the dimension of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape; that is, CCA converts the optimization problem into density estimation. The structure of the induced density is searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems, (ii) separating (partitioning) the original optimization problem into subproblems may aid interpretation, and, most importantly, (iii) CCA may give rise to large gains in optimization time. Numerical simulations illustrate the working of the algorithm.
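
    A hedged sketch of the CCA pipeline as the abstract describes it: Metropolis sampling converts the energy landscape into Boltzmann-distributed samples (the density-estimation step), and ICA then looks for directions in which the density factorizes, so each component can be searched separately. The landscape, temperature, and use of FastICA are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def energy(x):
        """Toy landscape that separates after a rotation; CCA should recover
        the rotated axes as independent components."""
        a = np.array([[0.8, 0.6], [-0.6, 0.8]]) @ x
        return a[0]**2 + 0.1 * a[1]**4

    rng = np.random.default_rng(0)
    T, x, samples = 0.5, np.zeros(2), []

    # Metropolis sampling of the Boltzmann density exp(-E/T): the
    # optimization problem becomes a density-estimation problem.
    for _ in range(20000):
        y = x + rng.normal(scale=0.3, size=2)
        dE = energy(y) - energy(x)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            x = y
        samples.append(x.copy())
    S = np.array(samples[2000:])                 # drop burn-in

    # ICA on the samples: each recovered component can be searched separately.
    ica = FastICA(n_components=2, random_state=0)
    components = ica.fit_transform(S)
    print(ica.mixing_)                           # approximately the rotation above
    ```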

  15. The design of multirate digital control systems

    NASA Technical Reports Server (NTRS)

    Berg, M. C.

    1986-01-01

    The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems, and a series of compensator pairs is synthesized for each. The three synthesis methods are then compared in the context of the two design problems.

  16. The optimal location of piezoelectric actuators and sensors for vibration control of plates

    NASA Astrophysics Data System (ADS)

    Kumar, K. Ramesh; Narayanan, S.

    2007-12-01

    This paper considers the optimal placement of collocated piezoelectric actuator-sensor pairs on a thin plate using a model-based linear quadratic regulator (LQR) controller. LQR performance is taken as the objective for finding the optimal locations of sensor-actuator pairs. The problem is formulated using the finite element method (FEM) as a multi-input multi-output (MIMO) control problem. The discrete optimal sensor and actuator location problem is formulated in the framework of a zero-one optimization problem, which is solved using a genetic algorithm (GA). Different classical control strategies, such as direct proportional feedback, constant-gain negative velocity feedback and the LQR optimal control scheme, are applied to study control effectiveness.
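
    The placement objective referred to is the standard LQR performance index (weighting matrices Q and R assumed):

    ```latex
    J = \int_{0}^{\infty} \bigl( x^{T} Q\, x + u^{T} R\, u \bigr)\, dt ,
    \qquad u = -Kx ,
    ```

    evaluated for each candidate zero-one placement vector and minimized by the GA over the admissible actuator-sensor locations.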

  17. Exploring the quantum speed limit with computer games

    NASA Astrophysics Data System (ADS)

    Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.

    2016-04-01

    Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.

  18. Exploring the quantum speed limit with computer games.

    PubMed

    Sørensen, Jens Jakob W H; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F

    2016-04-14

    Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. 'Gamification'--the application of game elements in a non-game context--is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.

  19. Data Understanding Applied to Optimization

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Shilman, Michael

    1998-01-01

    The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in some general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

  20. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
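    To make the mechanics concrete, the following is a minimal sketch of differential evolution with a Pareto-dominance replacement rule, under the simplifying assumptions of box bounds and a bi-objective test problem; the function pareto_de, its parameters, and the plain dominance-based acceptance are illustrative choices, not the exact Pareto-based extension of the paper.

        import numpy as np

        def dominates(f_a, f_b):
            # Pareto dominance: f_a is no worse in every objective and better in at least one.
            return np.all(f_a <= f_b) and np.any(f_a < f_b)

        def pareto_de(objectives, bounds, pop_size=40, gens=200, F=0.5, CR=0.9, seed=0):
            # objectives: callable returning a vector of objective values for a candidate x.
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            dim = lo.size
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            fit = np.array([objectives(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    # DE/rand/1 mutation: combine three distinct members other than i.
                    idx = rng.choice([j for j in range(pop_size) if j != i], size=3, replace=False)
                    a, b, c = pop[idx]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    # Binomial crossover with at least one component taken from the mutant.
                    mask = rng.random(dim) < CR
                    mask[rng.integers(dim)] = True
                    trial = np.where(mask, mutant, pop[i])
                    f_trial = objectives(trial)
                    # Pareto-based replacement: accept the trial if it dominates the parent.
                    if dominates(f_trial, fit[i]):
                        pop[i], fit[i] = trial, f_trial
            # Return the non-dominated subset of the final population.
            nd = [i for i in range(pop_size)
                  if not any(dominates(fit[j], fit[i]) for j in range(pop_size))]
            return pop[nd], fit[nd]

        # Example: the bi-objective Schaffer problem min(x^2, (x-2)^2) in one dimension.
        front, values = pareto_de(lambda x: np.array([x[0]**2, (x[0] - 2)**2]),
                                  (np.array([-5.0]), np.array([5.0])))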

  1. On a distinctive feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets

    NASA Astrophysics Data System (ADS)

    Trifonenkov, A. V.; Trifonenkov, V. P.

    2017-01-01

    This article deals with a feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during a threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning limits the variety of statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls, since the level of xenon poisoning is bounded. This raises the problem of choosing an appropriate segment of the time axis so that the optimal control problem remains consistent. Two procedures for estimating the duration of this segment are considered, the two estimates are plotted as functions of the xenon limit, and the boundaries of the averaging interval are thereby defined more precisely.

  2. Robust optimization modelling with applications to industry and environmental problems

    NASA Astrophysics Data System (ADS)

    Chaerani, Diah; Dewanto, Stanley P.; Lesmana, Eman

    2017-10-01

    Robust Optimization (RO) modeling is an established methodology for handling data uncertainty in optimization problems. The main challenge in RO is how and when the robust counterpart of an uncertain problem can be reformulated as a computationally tractable optimization problem, or at least approximated by a tractable one. By definition, the robust counterpart depends strongly on how the uncertainty set is chosen, so this challenge can be met only if that set is chosen in a suitable way. RO has developed rapidly; since 2004, an extension called Adjustable Robust Optimization (ARO) has been available for uncertain problems in which some decision variables must be treated as "wait and see" decisions, in contrast to classic RO, which models all decision variables as "here and now". In ARO, the uncertain problem can be viewed as a multistage decision problem whose later-stage variables are of the wait-and-see type. In this paper we present applications of both RO and ARO, summarizing the main results briefly to underline the importance of RO and ARO in many real-life problems.

  3. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  4. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  5. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  6. Design and multi-physics optimization of rotary MRF brakes

    NASA Astrophysics Data System (ADS)

    Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan

    2018-03-01

    Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the per-particle calculations become excessive as the number of particles and the complexity of the problem increase, and the execution speed becomes too slow to reach an optimized solution. This paper therefore proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and conventional PSO is that the original single population is split into several subpopulations according to a division of labor; the distribution of tasks and the transfer of information between groups are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs were determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method showed better performance than conventional PSO and produced small, lightweight, high-impedance rotary MRF brake designs.
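    As a concrete illustration of the divide-and-share idea described above, here is a minimal sketch of a PSO whose population is split into independently searching subswarms that periodically broadcast the best point found so far. The routine subswarm_pso, its parameters, and the simple broadcast rule are illustrative assumptions, not the paper's exact hunting-party scheme.

        import numpy as np

        def subswarm_pso(f, bounds, n_subswarms=4, swarm_size=10, iters=300,
                         w=0.7, c1=1.5, c2=1.5, exchange_every=25, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            dim = lo.size
            x = rng.uniform(lo, hi, size=(n_subswarms, swarm_size, dim))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.array([[f(p) for p in sub] for sub in x])
            for t in range(iters):
                for s in range(n_subswarms):
                    g = pbest[s, np.argmin(pbest_f[s])]            # subswarm-local best
                    r1, r2 = rng.random((2, swarm_size, dim))
                    v[s] = w * v[s] + c1 * r1 * (pbest[s] - x[s]) + c2 * r2 * (g - x[s])
                    x[s] = np.clip(x[s] + v[s], lo, hi)
                    fx = np.array([f(p) for p in x[s]])
                    better = fx < pbest_f[s]
                    pbest[s][better] = x[s][better]
                    pbest_f[s][better] = fx[better]
                if t % exchange_every == 0:
                    # Information transfer between subswarms: seed each with the global best.
                    s_best, i_best = np.unravel_index(np.argmin(pbest_f), pbest_f.shape)
                    x[:, 0] = pbest[s_best, i_best]
            best = np.unravel_index(np.argmin(pbest_f), pbest_f.shape)
            return pbest[best], pbest_f[best]

        # Example: minimize the sphere function in 5 dimensions.
        xopt, fopt = subswarm_pso(lambda p: float(np.sum(p**2)),
                                  (np.full(5, -10.0), np.full(5, 10.0)))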

  7. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  8. A general optimality criteria algorithm for a class of engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Belegundu, Ashok D.

    2015-05-01

    An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems that occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update, or resizing formula, is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.

  9. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
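    The prototyping workflow the authors describe reduces to instantiating the two proximal steps of the Chambolle-Pock iteration for a given objective. Below is a minimal, hedged sketch for a generic sparse-recovery instance min_x 0.5||Ax-b||^2 + lam*||x||_1 (i.e., F(Kx) + G(x) with K = A), not one of the CT-specific instances derived in the paper; the function name and test data are illustrative.

        import numpy as np

        def chambolle_pock_lasso(A, b, lam, iters=500):
            L = np.linalg.norm(A, 2)              # operator norm of K = A
            sigma = tau = 1.0 / L                 # step sizes satisfying sigma*tau*L^2 <= 1
            theta = 1.0
            x = np.zeros(A.shape[1])
            x_bar = x.copy()
            y = np.zeros(A.shape[0])
            for _ in range(iters):
                # Dual update: prox of sigma*F*, where F(z) = 0.5||z-b||^2,
                # so prox_{sigma F*}(v) = (v - sigma*b) / (1 + sigma).
                y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
                # Primal update: prox of tau*G for G = lam*||.||_1 is soft thresholding.
                x_new = x - tau * (A.T @ y)
                x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
                # Over-relaxation step.
                x_bar = x_new + theta * (x_new - x)
                x = x_new
            return x

        # Example: sparse recovery from a random Gaussian system.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((50, 100))
        x_true = np.zeros(100); x_true[:5] = 1.0
        x_hat = chambolle_pock_lasso(A, A @ x_true, lam=0.1)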

  10. Wind Farm Turbine Type and Placement Optimization

    NASA Astrophysics Data System (ADS)

    Graf, Peter; Dykes, Katherine; Scott, George; Fields, Jason; Lunacek, Monte; Quick, Julian; Rethore, Pierre-Elouan

    2016-09-01

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  11. Wind farm turbine type and placement optimization

    DOE PAGES

    Graf, Peter; Dykes, Katherine; Scott, George; ...

    2016-10-03

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  12. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, originating from research on the movement behavior of bird flocks and fish schools. In this paper we introduce and apply this method to the gravity inverse problem: determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique worked efficiently when tested on a number of models.

  13. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit, launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.

  14. Will farmers save water? A theoretical analysis of groundwater conservation policies

    USDA-ARS?s Scientific Manuscript database

    The development of agricultural irrigation systems has generated significant increases in food production and farm income. However, unplanned and unconstrained groundwater use could also cause serious consequences. To extend the economic life of groundwater, water conservation issues have become the...

  15. Defending Collegiality

    ERIC Educational Resources Information Center

    Fischer, Michael

    2009-01-01

    In his provocatively titled recent book, "The No Asshole Rule: Building a Civilized Workplace and Surviving One That Isn't", Robert I. Sutton argues for zero tolerance of "bullies, creeps, jerks, weasels, tormentors, tyrants, serial slammers, despots, [and] unconstrained egomaniacs" in the workplace. These individuals systematically prey on their…

  16. Electric train energy consumption modeling

    DOE PAGES

    Wang, Jinghui; Rakha, Hesham A.

    2017-05-01

    For this paper we develop an electric train energy consumption modeling framework considering instantaneous regenerative braking efficiency in support of a rail simulation system. The model is calibrated with data from Portland, Oregon using an unconstrained non-linear optimization procedure, and validated using data from Chicago, Illinois by comparing model predictions against the National Transit Database (NTD) estimates. The results demonstrate that regenerative braking efficiency varies as an exponential function of the deceleration level, rather than an average constant as assumed in previous studies. The model predictions are demonstrated to be consistent with the NTD estimates, producing a predicted error of 1.87% and -2.31%. The paper demonstrates that energy recovery reduces the overall power consumption by 20% for the tested Chicago route. Furthermore, the paper demonstrates that the proposed modeling approach is able to capture energy consumption differences associated with train, route and operational parameters, and thus is applicable for project-level analysis. The model can be easily implemented in traffic simulation software, used in smartphone applications and eco-transit programs given its fast execution time and easy integration in complex frameworks.
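    The calibration step described above lends itself to a short sketch: fit an exponential efficiency curve to observed braking data via unconstrained nonlinear least squares. The functional form eta(d) = alpha*(1 - exp(-beta*d)) and the data points below are illustrative assumptions, not the calibrated Portland model from the paper.

        import numpy as np
        from scipy.optimize import minimize

        # Illustrative (deceleration, efficiency) samples; units m/s^2 and fraction.
        decel = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
        eta_obs = np.array([0.18, 0.33, 0.45, 0.52, 0.58, 0.62])

        def sse(params):
            # Sum of squared errors of the exponential efficiency model.
            alpha, beta = params
            eta_model = alpha * (1.0 - np.exp(-beta * decel))
            return float(np.sum((eta_model - eta_obs) ** 2))

        # Unconstrained nonlinear optimization, as in the paper's calibration step.
        res = minimize(sse, x0=np.array([0.5, 1.0]), method="Nelder-Mead")
        alpha_hat, beta_hat = res.x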

  17. Electric train energy consumption modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jinghui; Rakha, Hesham A.

    For this paper we develop an electric train energy consumption modeling framework considering instantaneous regenerative braking efficiency in support of a rail simulation system. The model is calibrated with data from Portland, Oregon using an unconstrained non-linear optimization procedure, and validated using data from Chicago, Illinois by comparing model predictions against the National Transit Database (NTD) estimates. The results demonstrate that regenerative braking efficiency varies as an exponential function of the deceleration level, rather than an average constant as assumed in previous studies. The model predictions are demonstrated to be consistent with the NTD estimates, producing a predicted error of 1.87% and -2.31%. The paper demonstrates that energy recovery reduces the overall power consumption by 20% for the tested Chicago route. Furthermore, the paper demonstrates that the proposed modeling approach is able to capture energy consumption differences associated with train, route and operational parameters, and thus is applicable for project-level analysis. The model can be easily implemented in traffic simulation software, used in smartphone applications and eco-transit programs given its fast execution time and easy integration in complex frameworks.

  18. An algorithm for longitudinal registration of PET/CT images acquired during neoadjuvant chemotherapy in breast cancer: preliminary results.

    PubMed

    Li, Xia; Abramson, Richard G; Arlinghaus, Lori R; Chakravarthy, Anuradha Bapsi; Abramson, Vandana; Mayer, Ingrid; Farley, Jaime; Delbeke, Dominique; Yankeelov, Thomas E

    2012-11-16

    By providing estimates of tumor glucose metabolism, 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) can potentially characterize the response of breast tumors to treatment. To assess therapy response, serial measurements of FDG-PET parameters (derived from static and/or dynamic images) can be obtained at different time points during the course of treatment. However, most studies track the changes in average parameter values obtained from the whole tumor, thereby discarding all spatial information manifested in tumor heterogeneity. Here, we propose a method whereby serially acquired FDG-PET breast data sets can be spatially co-registered to enable the spatial comparison of parameter maps at the voxel level. The goal is to optimally register normal tissues while simultaneously preventing tumor distortion. In order to accomplish this, we constructed a PET support device to enable PET/CT imaging of the breasts of ten patients in the prone position and applied a mutual information-based rigid body registration followed by a non-rigid registration. The non-rigid registration algorithm extended the adaptive bases algorithm (ABA) by incorporating a tumor volume-preserving constraint, which computed the Jacobian determinant over the tumor regions as outlined on the PET/CT images, into the cost function. We tested this approach on ten breast cancer patients undergoing neoadjuvant chemotherapy. By both qualitative and quantitative evaluation, our constrained algorithm yielded significantly less tumor distortion than the unconstrained algorithm: considering the tumor volume determined from standard uptake value maps, the post-registration median tumor volume changes, and the 25th and 75th quantiles were 3.42% (0%, 13.39%) and 16.93% (9.21%, 49.93%) for the constrained and unconstrained algorithms, respectively (p = 0.002), while the bending energy (a measure of the smoothness of the deformation) was 0.0015 (0.0005, 0.012) and 0.017 (0.005, 0.044), respectively (p = 0.005). The results indicate that the constrained ABA algorithm can accurately align prone breast FDG-PET images acquired at different time points while keeping the tumor from being substantially compressed or distorted. NCT00474604.

  19. Cone beam CT-based set-up strategies with and without rotational correction for stereotactic body radiation therapy in the liver.

    PubMed

    Bertholet, Jenny; Worm, Esben; Høyer, Morten; Poulsen, Per

    2017-06-01

    Accurate patient positioning is crucial in stereotactic body radiation therapy (SBRT) due to a high dose regimen. Cone-beam computed tomography (CBCT) is often used for patient positioning based on radio-opaque markers. We compared six CBCT-based set-up strategies with or without rotational correction. Twenty-nine patients with three implanted markers received 3-6 fraction liver SBRT. The markers were delineated on the mid-ventilation phase of a 4D-planning-CT. One pretreatment CBCT was acquired per fraction. Set-up strategy 1 used only translational correction based on manual marker match between the CBCT and planning CT. Set-up strategy 2 used automatic 6 degrees-of-freedom registration of the vertebrae closest to the target. The 3D marker trajectories were also extracted from the projections and the mean position of each marker was calculated and used for set-up strategies 3-6. Translational correction only was used for strategy 3. Translational and rotational corrections were used for strategies 4-6 with the rotation being either vertebrae based (strategy 4), or marker based and constrained to ±3° (strategy 5) or unconstrained (strategy 6). The resulting set-up error was calculated as the 3D root-mean-square set-up error of the three markers. The set-up error of the spinal cord was calculated for all strategies. The bony anatomy set-up (2) had the largest set-up error (5.8 mm). The marker-based set-up with unconstrained rotations (6) had the smallest set-up error (0.8 mm) but the largest spinal cord set-up error (12.1 mm). The marker-based set-up with translational correction only (3) or with bony anatomy rotational correction (4) had equivalent set-up error (1.3 mm) but rotational correction reduced the spinal cord set-up error from 4.1 mm to 3.5 mm. Marker-based set-up was substantially better than bony-anatomy set-up. Rotational correction may improve the set-up, but further investigations are required to determine the optimal correction strategy.

  20. Analysis of a Two-Dimensional Thermal Cloaking Problem on the Basis of Optimization

    NASA Astrophysics Data System (ADS)

    Alekseev, G. V.

    2018-04-01

    For a two-dimensional model of thermal scattering, inverse problems arising in the development of tools for cloaking material bodies on the basis of a mixed thermal cloaking strategy are considered. By applying the optimization approach, these problems are reduced to optimization ones in which the role of controls is played by variable parameters of the medium occupying the cloaking shell and by the heat flux through a boundary segment of the basic domain. The solvability of the direct and optimization problems is proved, and an optimality system is derived. Based on its analysis, sufficient conditions on the input data are established that ensure the uniqueness and stability of optimal solutions.

  1. Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kania, Adhe; Sidarto, Kuntjoro Adji

    2016-02-01

    Many engineering and practical problems can be modeled as mixed integer nonlinear programs. This paper proposes to solve such problems with a modified version of the spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.
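    For orientation, here is a minimal sketch of the continuous two-dimensional spiral dynamics method of Tamura and Yasuda: each search point rotates by a fixed angle and contracts toward the current best point. The mixed-integer handling of the paper (e.g., rounding selected coordinates) is omitted; the function spiral_optimization and its parameters are illustrative.

        import numpy as np

        def spiral_optimization(f, lo, hi, n_points=30, iters=300, r=0.95,
                                theta=np.pi / 4, seed=0):
            rng = np.random.default_rng(seed)
            # Rotation matrix for the spiral step.
            R = np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
            x = rng.uniform(lo, hi, size=(n_points, 2))
            fx = np.array([f(p) for p in x])
            for _ in range(iters):
                center = x[np.argmin(fx)]                 # best point so far
                x = center + r * (x - center) @ R.T       # rotate and contract
                x = np.clip(x, lo, hi)
                fx = np.array([f(p) for p in x])
            best = np.argmin(fx)
            return x[best], fx[best]

        # Example: minimize Himmelblau's function.
        f = lambda p: (p[0]**2 + p[1] - 11)**2 + (p[0] + p[1]**2 - 7)**2
        xopt, fopt = spiral_optimization(f, lo=np.array([-5.0, -5.0]),
                                         hi=np.array([5.0, 5.0]))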

  2. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
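    To ground the knapsack-based selection idea, here is a minimal sketch that uses a standard 0-1 knapsack to pick base classifiers: each classifier carries a "value" (e.g., validation accuracy) and an integer "weight" (e.g., a redundancy score), and the budget caps total redundancy. The paper's tailored variant differs in detail; the function knapsack_select and the scores below are illustrative.

        def knapsack_select(values, weights, budget):
            # Standard 0-1 knapsack DP with traceback of the chosen subset.
            n = len(values)
            dp = [0.0] * (budget + 1)               # dp[w] = best value within weight w
            keep = [[False] * (budget + 1) for _ in range(n)]
            for i in range(n):
                for w in range(budget, weights[i] - 1, -1):
                    if dp[w - weights[i]] + values[i] > dp[w]:
                        dp[w] = dp[w - weights[i]] + values[i]
                        keep[i][w] = True
            chosen, w = [], budget
            for i in range(n - 1, -1, -1):          # trace back the selected items
                if keep[i][w]:
                    chosen.append(i)
                    w -= weights[i]
            return sorted(chosen)

        # Example: six classifiers, integer redundancy scores, redundancy budget of 7.
        print(knapsack_select(values=[0.81, 0.78, 0.75, 0.74, 0.70, 0.69],
                              weights=[3, 2, 2, 3, 1, 2], budget=7))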

  3. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    PubMed Central

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350

  4. Nash equilibrium and multi criterion aerodynamic optimization

    NASA Astrophysics Data System (ADS)

    Tang, Zhili; Zhang, Lianhe

    2016-06-01

    Game theory, and in particular its Nash Equilibrium (NE), has been gaining importance in solving Multi Criterion Optimization (MCO) engineering problems over the past decade. The solution of an MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys and proposes four efficient algorithms for calculating a NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on a fixed point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single objective optimization problems. Two numerical examples are presented to verify the proposed algorithms: one is the optimization of mathematical functions, which illustrates the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.
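    One simple fixed-point scheme for computing a NE of such a game is alternating best responses, sketched below under the assumption of a two-player split of the design variables with smooth payoffs. This is only one of several possible algorithms (the paper analyzes four); the function nash_best_response and the quadratic toy game are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        def nash_best_response(f1, f2, x0, y0, rounds=50, tol=1e-8):
            # Player 1 minimizes f1 over x with y fixed; player 2 minimizes f2
            # over y with x fixed; iterate until neither strategy moves.
            x, y = np.asarray(x0, float), np.asarray(y0, float)
            for _ in range(rounds):
                x_new = minimize(lambda u: f1(u, y), x).x
                y_new = minimize(lambda v: f2(x_new, v), y).x
                if max(np.abs(x_new - x).max(), np.abs(y_new - y).max()) < tol:
                    x, y = x_new, y_new
                    break
                x, y = x_new, y_new
            return x, y

        # Example: quadratic game whose unique equilibrium is x = y = 0.
        f1 = lambda x, y: float((x[0] - 0.5 * y[0]) ** 2)
        f2 = lambda x, y: float((y[0] - 0.5 * x[0]) ** 2)
        x_star, y_star = nash_best_response(f1, f2, [1.0], [1.0])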

  5. Exact solution of large asymmetric traveling salesman problems.

    PubMed

    Miller, D L; Pekny, J F

    1991-02-15

    The traveling salesman problem is one of a class of difficult problems in combinatorial optimization that is representative of a large number of important scientific and engineering problems. A survey is given of recent applications and methods for solving large problems. In addition, an algorithm for the exact solution of the asymmetric traveling salesman problem is presented along with computational results for several classes of problems. The results show that the algorithm performs remarkably well for some classes of problems, determining an optimal solution even for problems with large numbers of cities, yet for other classes, even small problems thwart determination of a provably optimal solution.
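    For context on what "exact solution" means here, the sketch below shows the classical Held-Karp dynamic program for the asymmetric TSP. It is exact but O(n^2 * 2^n), so it is only practical for small instances, unlike the branch-and-bound approach the paper uses for large problems; the function held_karp and the 4-city instance are illustrative.

        from itertools import combinations

        def held_karp(dist):
            # C[(S, j)] = cheapest cost of a path that starts at city 0,
            # visits exactly the cities in set S, and ends at city j in S.
            n = len(dist)
            C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
            for size in range(2, n):
                for S in combinations(range(1, n), size):
                    fs = frozenset(S)
                    for j in S:
                        C[(fs, j)] = min(C[(fs - {j}, k)] + dist[k][j]
                                         for k in S if k != j)
            full = frozenset(range(1, n))
            # Close the tour by returning to city 0.
            return min(C[(full, j)] + dist[j][0] for j in range(1, n))

        # Example: a 4-city asymmetric distance matrix.
        d = [[0, 2, 9, 10],
             [1, 0, 6, 4],
             [15, 7, 0, 8],
             [6, 3, 12, 0]]
        print(held_karp(d))   # prints the optimal tour length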

  6. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  7. A new chaotic multi-verse optimization algorithm for solving engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella

    2018-03-01

    Multi-verse optimization algorithm (MVO) is one of the recent meta-heuristic optimization algorithms, whose main inspiration comes from the multi-verse theory in physics. However, MVO, like most optimization algorithms, suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied to 13 benchmark functions and 7 well-known design problems in the engineering and mechanical field, namely: three-bar truss, speed reducer design, pressure vessel problem, spring design, welded beam, rolling element bearing and multiple disc clutch brake. In the current study, a modified feasibility-based mechanism is employed to handle constraints. In this mechanism, four rules are used to handle the specific constraint problem by maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. Also, the results reveal that the sine chaotic map is the most appropriate map to significantly boost MVO's performance.
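    The "chaotic" ingredient amounts to driving a metaheuristic's stochastic choices with a deterministic chaotic sequence instead of uniform pseudo-random numbers. The sketch below shows this with a sine map feeding a simple greedy random walk; it does not reproduce the full MVO update, and the functions sine_map and chaotic_random_walk are illustrative.

        import numpy as np

        def sine_map(x):
            # Sine chaotic map on (0, 1).
            return abs(np.sin(np.pi * x))

        def chaotic_random_walk(f, lo, hi, iters=1000, step=0.1, x0=0.7):
            dim = lo.size
            chaos = x0
            x = (lo + hi) / 2.0
            fx = f(x)
            for _ in range(iters):
                # Advance the chaotic sequence once per coordinate.
                deltas = np.empty(dim)
                for d in range(dim):
                    chaos = sine_map(chaos)
                    deltas[d] = (2.0 * chaos - 1.0) * step * (hi[d] - lo[d])
                cand = np.clip(x + deltas, lo, hi)
                fc = f(cand)
                if fc < fx:                     # greedy acceptance
                    x, fx = cand, fc
            return x, fx

        # Example: minimize the sphere function in 3 dimensions.
        xopt, fopt = chaotic_random_walk(lambda p: float(np.sum(p**2)),
                                         np.full(3, -5.0), np.full(3, 5.0))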

  8. Optimal control of a harmonic oscillator: Economic interpretations

    NASA Astrophysics Data System (ADS)

    Janová, Jitka; Hampel, David

    2013-10-01

    Optimal control is a popular technique for modelling and solving dynamic decision problems in economics. A standard interpretation of the criterion function and Lagrange multipliers in the profit maximization problem is well known. Using a particular example, we aim at a deeper understanding of the possible economic interpretations of further mathematical and solution features of the optimal control problem: we focus on the solution of the optimal control problem for the harmonic oscillator serving as a model for the Phillips business cycle. We discuss the economic interpretations of the arising mathematical objects with respect to the well-known reasoning for these objects in other problems.

  9. Pre-trained D-CNN models for detecting complex events in unconstrained videos

    NASA Astrophysics Data System (ADS)

    Robinson, Joseph P.; Fu, Yun

    2016-05-01

    Rapid event detection faces an emergent need to process large video collections; whether surveillance videos or unconstrained web videos, the ability to automatically recognize high-level, complex events is a challenging task. Motivated by pre-existing methods being complex, computationally demanding, and often non-replicable, we designed a simple system that is quick, effective and carries minimal overhead in terms of memory and storage. Our system is clearly described, modular in nature, replicable on any desktop, and demonstrated with extensive experiments, backed by insightful analysis on different Convolutional Neural Networks (CNNs), as stand-alone and fused with others. With a large corpus of unconstrained, real-world video data, we examine the usefulness of different CNN models as feature extractors for modeling high-level events, i.e., pre-trained CNNs that differ in architectures, training data, and number of outputs. For each CNN, we use 1-fps samples from all training exemplars to train one-vs-rest SVMs for each event. To represent videos, frame-level features were fused using a variety of techniques, the best being to max-pool between predetermined shot boundaries and then average-pool to form the final video-level descriptor. Through extensive analysis, several insights were found on using pre-trained CNNs as off-the-shelf feature extractors for the task of event detection. Fusing SVMs of different CNNs revealed some interesting facts, finding some combinations to be complementary. It was concluded that no single CNN works best for all events, as some events are more object-driven while others are more scene-based. Our top performance resulted from learning event-dependent weights for different CNNs.
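    The best-performing pooling scheme described above is easy to state in code. Below is a minimal sketch, assuming frame-level CNN features are already extracted as a matrix; the function video_descriptor and the shot boundaries in the example are illustrative.

        import numpy as np

        def video_descriptor(frame_feats, shot_boundaries):
            # Max-pool frame-level features within each shot, then average-pool
            # the shot descriptors into a single video-level vector.
            # frame_feats: (n_frames, feat_dim); shot_boundaries: frame indices
            # starting each shot, with shot_boundaries[0] == 0 assumed.
            shots = np.split(frame_feats, shot_boundaries[1:])
            shot_desc = np.stack([s.max(axis=0) for s in shots])
            return shot_desc.mean(axis=0)

        # Example: 120 frames of 4096-D features, shots starting at frames 0, 40, 85.
        feats = np.random.default_rng(0).random((120, 4096))
        v = video_descriptor(feats, shot_boundaries=[0, 40, 85])   # shape (4096,)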

  10. Patterns of thought: Population variation in the associations between large-scale network organisation and self-reported experiences at rest.

    PubMed

    Wang, Hao-Ting; Bzdok, Danilo; Margulies, Daniel; Craddock, Cameron; Milham, Michael; Jefferies, Elizabeth; Smallwood, Jonathan

    2018-08-01

    Contemporary cognitive neuroscience recognises that unconstrained processing varies across individuals and that this variation relates to meaningful attributes, such as intelligence; it may also have links to patterns of on-going experience. This study examined whether dimensions of population variation in different modes of unconstrained processing can be described by the associations between patterns of neural activity and self-reports of experience during the same period. We selected 258 individuals from a publicly available data set who had measures of resting-state functional magnetic resonance imaging, and self-reports of experience during the scan. We used machine learning to determine patterns of association between the neural and self-reported data, finding variation along four dimensions. 'Purposeful' experiences were associated with lower connectivity - in particular, default mode and limbic networks were less correlated with attention and sensorimotor networks. 'Emotional' experiences were associated with higher connectivity, especially between limbic and ventral attention networks. Experiences focused on themes of 'personal importance' were associated with reduced functional connectivity within attention and control systems. Finally, visual experiences were associated with stronger connectivity between visual and other networks, in particular the limbic system. Some of these patterns had contrasting links with cognitive function as assessed in a separate laboratory session - purposeful thinking was linked to greater intelligence and better abstract reasoning, while a focus on personal importance had the opposite relationship. Together these findings are consistent with an emerging literature on unconstrained states and also underline that these states are heterogeneous, with distinct modes of population variation reflecting the interplay of different large-scale networks.

  11. Task Context Influences Brain Activation during Music Listening

    PubMed Central

    Markovic, Andjela; Kühnis, Jürg; Jäncke, Lutz

    2017-01-01

    In this paper, we examined brain activation in subjects during two music listening conditions: listening while simultaneously rating the musical piece being played [Listening and Rating (LR)] and listening to the musical pieces unconstrained [Listening (L)]. Using these two conditions, we tested whether the sequence in which the two conditions were fulfilled influenced the brain activation observable during the L condition (LR → L or L → LR). We recorded high-density EEG during the playing of four well-known positively experienced soundtracks in two subject groups. One group started with the L condition and continued with the LR condition (L → LR); the second group performed this experiment in reversed order (LR → L). We computed from the recorded EEG the power for different frequency bands (theta, lower alpha, upper alpha, lower beta, and upper beta). Statistical analysis revealed that the power in all examined frequency bands increased during the L condition but only when the subjects had not had previous experience with the LR condition (i.e., L → LR). For the subjects who began with the LR condition, there were no power increases during the L condition. Thus, the previous experience with the LR condition prevented subjects from developing the particular mental state associated with the typical power increase in all frequency bands. The subjects without previous experience of the LR condition listened to the musical pieces in an unconstrained and undisturbed manner and showed a general power increase in all frequency bands. We interpret the fact that unconstrained music listening was associated with increased power in all examined frequency bands as a neural indicator of a mental state that can best be described as a mind-wandering state during which the subjects are “drawn into” the music. PMID:28706480

  12. Near infrared and visible face recognition based on decision fusion of LBP and DCT features

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-03-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, can not achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light- independent, can avoid or limit the drawbacks of face recognition in visible light, but its main challenges are low resolution and signal noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper, we proposed a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features in near-infrared face image are extracted by the low frequency part of DCT coefficients and the partition histograms of LBP operator. Secondly, the LBP features of visible-light face image are extracted to compensate for the lacking detail features of the near-infrared face image. Then, the LBP features of visible-light face image, the DCT and LBP features of near-infrared face image are sent to each classifier for labeling. Finally, decision level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on HITSZ Lab2 visible and near infrared face database. The experiment results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially for the circumstance of small training samples, the recognition rate of proposed method can reach 96.13%, which has improved significantly than 92.75 % of the method based on statistical feature fusion.

  13. Advantages and Disadvantages of Transtibial, Anteromedial Portal, and Outside-In Femoral Tunnel Drilling in Single-Bundle Anterior Cruciate Ligament Reconstruction: A Systematic Review.

    PubMed

    Robin, Brett N; Jani, Sunil S; Marvil, Sean C; Reid, John B; Schillhammer, Carl K; Lubowitz, James H

    2015-07-01

    Controversy exists regarding the best method for creating the knee anterior cruciate ligament (ACL) femoral tunnel or socket. The purpose of this study was to systematically review the risks, benefits, advantages, and disadvantages of the endoscopic transtibial (TT) technique, anteromedial portal technique, outside-in technique, and outside-in retrograde drilling technique for creating the ACL femoral tunnel. A PubMed search of English-language studies published between January 1, 2000, and February 17, 2014, was performed using the following keywords: "anterior cruciate ligament" AND "femoral tunnel." Included were studies reporting risks, benefits, advantages, and/or disadvantages of any ACL femoral technique. In addition, references of included articles were reviewed to identify potential studies missed in the original search. A total of 27 articles were identified through the search. TT technique advantages include familiarity and proven long-term outcomes; disadvantages include the risk of nonanatomic placement because of constrained (TT) drilling. Anteromedial portal technique advantages include unconstrained anatomic placement; disadvantages include technical challenges, short tunnels or sockets, and posterior-wall blowout. Outside-in technique advantages include unconstrained anatomic placement; disadvantages include the need for 2 incisions. Retrograde drilling technique advantages include unconstrained anatomic placement, as well as all-epiphyseal drilling in skeletally immature patients; disadvantages include the need for fluoroscopy for all-epiphyseal drilling. There is no one, single, established "gold-standard" technique for creation of the ACL femoral socket. Four accepted techniques show diverse and subjective advantages, disadvantages, risks, and benefits. Level V, systematic review of Level II through V evidence.

  14. Singular perturbation analysis of AOTV-related trajectory optimization problems

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Bae, Gyoung H.

    1990-01-01

    The problem of real time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model order reduction to a single state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which to date is found to be analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where heating rate is constrained. A first order state variable inequality constraint was imposed on the full order AOTV point mass equations of motion, using a simple aerodynamic heating rate model.

  15. Automatic image orientation detection via confidence-based integration of low-level and semantic cues.

    PubMed

    Luo, Jiebo; Boutell, Matthew

    2005-05-01

    Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture. Discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, impressive given the findings of a psychophysical study conducted recently. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.

  16. Scene text detection by leveraging multi-channel information and local context

    NASA Astrophysics Data System (ADS)

    Wang, Runmin; Qian, Shengyou; Yang, Jianfeng; Gao, Changxin

    2018-03-01

    As an important information carrier, text plays a significant role in many applications. However, text detection in unconstrained scenes is a challenging problem due to cluttered backgrounds, varied appearances, uneven illumination, etc. In this paper, an approach based on multi-channel information and local context is proposed to detect text in natural scenes. Because character candidate detection plays a vital role in a text detection system, Maximally Stable Extremal Regions (MSERs) and a graph-cut based method are integrated to obtain character candidates by leveraging multi-channel image information. A cascaded false-positive elimination mechanism is constructed from the perspectives of the character and the text line, respectively. Since local context information is very valuable, it is utilized to retrieve missing characters and boost the text detection performance. Experimental results on two benchmark datasets, i.e., the ICDAR 2011 dataset and the ICDAR 2013 dataset, demonstrate that the proposed method achieves state-of-the-art performance.

  17. Aircraft Turbofan Engine Health Estimation Using Constrained Kalman Filtering

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results obtained from application to a turbofan engine model. This model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
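    The core of the approach, combining a standard Kalman update with a small quadratic program, admits a compact illustration. Below is a minimal sketch for linear equality constraints D x = d, to which the inequality-constrained case reduces on the active constraints; the function project_estimate and the numerical values are illustrative, not the 16-state engine model of the paper.

        import numpy as np

        def project_estimate(x_hat, P, D, d):
            # Project the unconstrained Kalman estimate x_hat (covariance P)
            # onto the constraint surface D x = d, minimizing the weighted
            # distance (x - x_hat)' inv(P) (x - x_hat). Closed form:
            # x = x_hat - P D' (D P D')^{-1} (D x_hat - d).
            PDt = P @ D.T
            K = PDt @ np.linalg.inv(D @ PDt)
            return x_hat - K @ (D @ x_hat - d)

        # Example: enforce x[0] + x[1] = 1 on a 2-state estimate.
        x_hat = np.array([0.7, 0.6])
        P = np.diag([0.04, 0.09])
        D = np.array([[1.0, 1.0]])
        d = np.array([1.0])
        x_con = project_estimate(x_hat, P, D, d)   # lies exactly on the constraint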

  18. Towards an Automated Acoustic Detection System for Free Ranging Elephants.

    PubMed

    Zeppelzauer, Matthias; Hensman, Sean; Stoeger, Angela S

    The human-elephant conflict is one of the most serious conservation problems in Asia and Africa today. The involuntary confrontation of humans and elephants claims the lives of many animals and humans every year. A promising approach to alleviate this conflict is the development of an acoustic early warning system. Such a system requires the robust automated detection of elephant vocalizations under unconstrained field conditions. Today, no system exists that fulfills these requirements. In this paper, we present a method for the automated detection of elephant vocalizations that is robust to the diverse noise sources present in the field. We evaluate the method on a dataset recorded under natural field conditions to simulate a real-world scenario. The proposed method outperformed existing approaches and robustly and accurately detected elephants. It thus can form the basis for a future automated early warning system for elephants. Furthermore, the method may be a useful tool for scientists in bioacoustics for the study of wildlife recordings.

  19. Planar dynamics of a uniform beam with rigid bodies affixed to the ends

    NASA Technical Reports Server (NTRS)

    Storch, J.; Gates, S.

    1983-01-01

    The planar dynamics of a uniform elastic beam subject to a variety of geometric and natural boundary conditions and external excitations were analyzed. The beams are inextensible and capable of small transverse bending deformations only. Classical beam vibration eigenvalue problems for a cantilever with tip mass, a cantilever with tip body, and an unconstrained beam with rigid bodies at each end are examined. The characteristic equations, eigenfunctions and orthogonality relations for each are derived. The forced vibration of a cantilever with tip body subject to base acceleration is analyzed. The exact solution of the governing nonhomogeneous partial differential equation with time-dependent boundary conditions is presented and compared with a Rayleigh-Ritz approximate solution. The arbitrary planar motion of an elastic beam with rigid bodies at the ends is addressed. Equations of motion are derived for two modal expansions of the beam deflection. The motion equations are cast in a first-order form suitable for numerical integration. Selected FORTRAN programs are provided.

  20. First- and third-party ground truth for key frame extraction from consumer video clips

    NASA Astrophysics Data System (ADS)

    Costello, Kathleen; Luo, Jiebo

    2007-02-01

    Extracting key frames (KF) from video is of great interest in many applications, such as video summary, video organization, video compression, and prints from video. KF extraction is not a new problem. However, the current literature has focused mainly on sports or news video. In the consumer video space, the biggest challenges for key frame selection are the unconstrained content and the lack of any pre-imposed structure. In this study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are: (1) to create a reference database of video clips reasonably representative of the consumer video space; (2) to identify associated key frames by which automated algorithms can be compared and judged for effectiveness; and (3) to uncover the criteria used by both first- and third-party human judges so these criteria can influence algorithm design. The findings from these ground truths will be discussed.

  1. Constraining movement alters the recruitment of motor processes in mental rotation.

    PubMed

    Moreau, David

    2013-02-01

    Does mental rotation depend on the readiness to act? Recent evidence indicates that the involvement of motor processes in mental rotation is experience-dependent, suggesting that different levels of expertise in sensorimotor interactions lead to different strategies for solving mental rotation problems. Specifically, experts in motor activities perceive spatial material as objects that can be acted upon, triggering covert simulation of rotations. Because action simulation depends on the readiness to act, movement restriction should therefore disrupt mental rotation performance in individuals favoring motor processes. In this experiment, wrestlers and non-athletes judged whether pairs of three-dimensional stimuli were identical or different, with their hands either constrained or unconstrained. Wrestlers showed higher performance than controls in the rotation of geometric stimuli, but this difference disappeared when their hands were constrained. However, movement restriction had similar consequences for both groups in the rotation of hands. These findings suggest that experts' advantage in mental rotation of abstract objects is based on the readiness to act, even when physical manipulation is impossible.

  2. Age and gender classification in the wild with unsupervised feature learning

    NASA Astrophysics Data System (ADS)

    Wan, Lihong; Huo, Hong; Fang, Tao

    2017-03-01

    Inspired by unsupervised feature learning (UFL) within the self-taught learning framework, we propose a method based on UFL, convolution representation, and part-based dimensionality reduction to handle facial age and gender classification, which are two challenging problems under unconstrained circumstances. First, UFL is introduced to learn selective receptive fields (filters) automatically by applying whitening transformation and spherical k-means on random patches collected from unlabeled data. The learning process is fast and has no hyperparameters to tune. Then, the input image is convolved with these filters to obtain filtering responses on which local contrast normalization is applied. Average pooling and feature concatenation are then used to form global face representation. Finally, linear discriminant analysis with part-based strategy is presented to reduce the dimensions of the global representation and to improve classification performances further. Experiments on three challenging databases, namely, Labeled faces in the wild, Gallagher group photos, and Adience, demonstrate the effectiveness of the proposed method relative to that of state-of-the-art approaches.
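
    The filter-learning stage described above (whitening followed by spherical k-means on random patches) can be sketched as follows; the array shapes and names are illustrative, and the later convolution, pooling, and LDA stages are omitted:

```python
import numpy as np

def learn_filters(patches, k=64, iters=10, eps=1e-5, seed=0):
    """UFL sketch: ZCA-whiten random patches, then run spherical k-means;
    the resulting unit-norm centroids serve as convolution filters once
    reshaped to the patch size.  patches: (n_patches, patch_dim)."""
    rng = np.random.default_rng(seed)
    X = patches - patches.mean(axis=0)
    C = np.cov(X, rowvar=False)
    U, S, _ = np.linalg.svd(C)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T     # ZCA whitening matrix
    X = X @ W
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    D = X[rng.choice(len(X), k, replace=False)]       # init centroids
    for _ in range(iters):
        assign = np.argmax(X @ D.T, axis=1)           # cosine similarity
        for j in range(k):
            members = X[assign == j]
            if len(members):
                v = members.sum(axis=0)
                D[j] = v / (np.linalg.norm(v) + 1e-12)  # renormalize
    return D, W
```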

  3. Arctic curves in path models from the tangent method

    NASA Astrophysics Data System (ADS)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  4. Singularity and steering logic for control moment gyros on flexible space structures

    NASA Astrophysics Data System (ADS)

    Hu, Quan; Guo, Chuandong; Zhang, Jun

    2017-08-01

    Control moment gyros (CMGs) are widely used devices for generating control torques for spacecraft attitude control without expending propellant. Because of their effectiveness and propellant-free operation, they have also been considered for mounting on space structures for active vibration suppression. The resultant system is the so-called gyroelastic body. Since CMGs can exert both torques and modal forces on the structure, they can also be used to achieve attitude maneuvering and vibration reduction of a flexible spacecraft simultaneously. In this paper, we consider the singularity problem in such applications of CMGs. The dynamics of an unconstrained gyroelastic body is established, from which the output equations of the CMGs are extracted. Then, torque singular states and modal force singular states are defined and visualized to demonstrate the singularity. Numerical examples of several typical CMG configurations on a gyroelastic body are given. Finally, a steering law allowing output error is designed and applied to the vibration suppression of a plate with distributed CMGs.
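
    The abstract does not spell out the steering law; as an illustration of the general idea, the standard singularity measure and a classical singularity-robust (SR) inverse of the kind that trades a small output error for well-conditioned gimbal rates near singular states can be sketched as below, where the output Jacobian A (torque and modal-force rows, one column per gimbal) is assumed supplied:

```python
import numpy as np

def singularity_measure(A):
    """m = sqrt(det(A A^T)) for the output Jacobian A; m -> 0 as the CMG
    cluster approaches a torque / modal-force singular state."""
    return float(np.sqrt(max(np.linalg.det(A @ A.T), 0.0)))

def sr_steering(A, y_cmd, lam0=0.01):
    """Singularity-robust inverse: accept a small output error near
    singular states rather than demanding the exact commanded output."""
    lam = lam0 * np.exp(-singularity_measure(A))   # more damping as m -> 0
    n = A.shape[0]
    return A.T @ np.linalg.solve(A @ A.T + lam * np.eye(n), y_cmd)
```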

  5. Efficient and stable exponential time differencing Runge-Kutta methods for phase field elastic bending energy models

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqiang; Ju, Lili; Du, Qiang

    2016-07-01

    The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
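
    The time-integration ingredient named above can be illustrated with the classical Cox-Matthews ETDRK2 scheme for a semilinear system u' = Lu + N(u) with L diagonal, as it is for spectral discretizations on regular meshes. This is a generic second-order sketch, not the authors' stabilized higher-order schemes:

```python
import numpy as np

def etdrk2_step(u, L, N, h):
    """One Cox-Matthews ETDRK2 step for u' = L u + N(u); L is passed as an
    array of eigenvalues of the (diagonalized) linear operator."""
    hL = h * np.asarray(L, dtype=float)
    E = np.exp(hL)
    small = np.abs(hL) < 1e-8
    hs = np.where(small, 1.0, hL)               # dodge 0/0 in unused branch
    phi1 = np.where(small, 1.0 + hL / 2, (E - 1.0) / hs)
    phi2 = np.where(small, 0.5 + hL / 6, (E - 1.0 - hL) / hs**2)
    Nu = N(u)
    a = E * u + h * phi1 * Nu                   # predictor (ETD1 step)
    return a + h * phi2 * (N(a) - Nu)           # corrector
```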

  6. Conceptual Comparison of Population Based Metaheuristics for Engineering Problems

    PubMed Central

    Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools that have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted for solving constrained and unconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE3) is used for solving practical engineering problems. The GDE3 metaheuristic modifies the selection process of basic differential evolution and extends the DE/rand/1/bin strategy to practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes. PMID:25874265

  7. Conceptual comparison of population based metaheuristics for engineering problems.

    PubMed

    Adekanmbi, Oluwole; Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools that have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted for solving constrained and unconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE3) is used for solving practical engineering problems. The GDE3 metaheuristic modifies the selection process of basic differential evolution and extends the DE/rand/1/bin strategy to practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.
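
    For reference, the DE/rand/1/bin strategy that GDE3 extends looks as follows for a single scalar objective; GDE3's constrained, multiobjective selection rules are not shown, and the parameter values are typical defaults rather than the paper's settings:

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
    """Basic DE/rand/1/bin: rand/1 mutation, binomial crossover, greedy
    one-to-one selection.  bounds is a sequence of (low, high) pairs."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)  # mutation
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True       # binomial crossover
            u = np.where(mask, v, pop[i])
            fu = f(u)
            if fu <= fit[i]:                     # greedy selection
                pop[i], fit[i] = u, fu
    return pop[np.argmin(fit)], fit.min()

# Usage: minimize the 5-D sphere function.
best_x, best_f = de_rand_1_bin(lambda x: float(np.sum(x**2)),
                               [(-5.0, 5.0)] * 5)
```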

  8. Efficiency of quantum vs. classical annealing in nonconvex learning problems

    PubMed Central

    Zecchina, Riccardo

    2018-01-01

    Quantum annealers aim at solving nonconvex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists of designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable transverse quantum field to generate tunneling processes. A key challenge is to identify classes of nonconvex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions. PMID:29382764
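
    The classical thermal baseline against which quantum annealing is compared is, in its simplest form, Metropolis single-spin-flip annealing; a generic sketch follows (E is an arbitrary energy function over spins s_i = ±1, and the cooling schedule is illustrative):

```python
import numpy as np

def simulated_annealing(E, s0, T0=2.0, Tmin=1e-2, cool=0.999, seed=0):
    """Classical thermal annealing: propose single spin flips, accept with
    the Metropolis rule, and geometrically cool the temperature."""
    rng = np.random.default_rng(seed)
    s, e, T = s0.copy(), E(s0), T0
    while T > Tmin:
        i = rng.integers(len(s))
        s[i] *= -1                       # propose flipping one spin
        de = E(s) - e
        if de <= 0 or rng.random() < np.exp(-de / T):
            e += de                      # accept the flip
        else:
            s[i] *= -1                   # reject: undo the flip
        T *= cool                        # geometric cooling schedule
    return s, e

# Usage: a random symmetric coupling matrix, E(s) = -0.5 * s' J s.
rng0 = np.random.default_rng(1)
J = rng0.standard_normal((50, 50)); J = (J + J.T) / 2
s0 = rng0.choice([-1, 1], size=50)
print(simulated_annealing(lambda s: -0.5 * s @ J @ s, s0)[1])
```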

  9. Direct Multiple Shooting Optimization with Variable Problem Parameters

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan J.; Ocampo, Cesar A.

    2009-01-01

    Taking advantage of a novel formulation of the orbital transfer optimization problem and advanced nonlinear programming algorithms, several optimal transfer trajectories are found for problems with and without known analytic solutions. The method treats the fixed, known gravitational constants as optimization variables in order to reduce the need for an accurate initial guess. Complex periodic orbits are targeted with very simple guesses, and the ability to find optimal transfers in spite of these poor guesses is successfully demonstrated. Impulsive transfers are considered for orbits in both the two-body frame and the circular restricted three-body problem (CRTBP). The results with this new approach demonstrate the potential for increased robustness for all types of orbit transfer problems.
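
    The underlying transcription can be shown on a toy problem. The sketch below applies direct multiple shooting to a double integrator (segment boundary states are free variables tied together by defect constraints, with piecewise-constant controls); it is illustrative only, since the paper's formulation additionally frees physical constants and targets orbital transfers:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

NSEG, T = 4, 1.0          # illustrative segment count and horizon

def propagate(x0, u):
    """Integrate x'' = u over one segment with constant control u."""
    f = lambda t, x: [x[1], u]
    return solve_ivp(f, (0.0, T / NSEG), x0).y[:, -1]

def unpack(z):
    xs = z[:2 * NSEG].reshape(NSEG, 2)   # segment initial states
    us = z[2 * NSEG:]                    # segment controls
    return xs, us

def defects(z):
    """Initial condition, segment continuity, and terminal constraints."""
    xs, us = unpack(z)
    d = [xs[0] - np.array([1.0, 0.0])]               # start at x=1, v=0
    for k in range(NSEG - 1):
        d.append(propagate(xs[k], us[k]) - xs[k + 1])
    d.append(propagate(xs[-1], us[-1]))              # end at the origin
    return np.concatenate(d)

sol = minimize(lambda z: float(np.sum(unpack(z)[1] ** 2)),
               np.zeros(3 * NSEG),
               constraints={"type": "eq", "fun": defects})
print(sol.x[2 * NSEG:])   # minimum-energy piecewise-constant controls
```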

  10. The pseudo-Boolean optimization approach to form the N-version software structure

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    Developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for it. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version system design. These algorithms take into account specific features discovered in the objective function. Practical experiments have shown the advantage of these algorithm modifications, which reduce the search space.

  11. Optimal control of LQR for discrete time-varying systems with input delays

    NASA Astrophysics Data System (ADS)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with single input and multiple input delays. An innovative and simple method for deriving the optimal controller is given. The studied problem is first converted into an equivalent problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out, and its results show that both approaches are feasible and very effective.
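
    Once the delays have been eliminated, the resulting delay-free finite-horizon LQR problem is solved by the standard backward Riccati recursion; the sketch below is that textbook recursion for a time-varying system, not the paper's duality-based derivation:

```python
import numpy as np

def dlqr_gains(A_seq, B_seq, Q, R, QN):
    """Backward Riccati recursion for x_{k+1} = A_k x_k + B_k u_k with
    stage cost x'Qx + u'Ru and terminal cost x'QN x; returns gains K_k
    for the optimal feedback u_k = -K_k x_k."""
    P = QN
    gains = []
    for A, B in zip(reversed(A_seq), reversed(B_seq)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]
```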

  12. Exact solution for an optimal impermeable parachute problem

    NASA Astrophysics Data System (ADS)

    Lupu, Mircea; Scheiber, Ernest

    2002-10-01

    The paper solves direct and inverse boundary problems and obtains analytical solutions for optimization problems involving some nonlinear integral operators. It models the plane potential flow of an inviscid, incompressible, unbounded fluid jet which encounters a symmetrical, curvilinear obstacle: the deflector of maximal drag. Singular integral equations are derived for the direct and inverse problems, and the motion in the auxiliary canonical half-plane is obtained. Next, the optimization problem is solved analytically. The optimal airfoil design is performed and, finally, numerical computations concerning the drag coefficient and other geometrical and aerodynamical parameters are carried out. This model corresponds to the Helmholtz impermeable parachute problem.

  13. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.

  14. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
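
    The firefly algorithm used above for the data-parameterization step is, in its standard form (Yang's attractiveness model), the following; parameter values are generic defaults, not the paper's settings, and the spline-specific objective is abstracted into f:

```python
import numpy as np

def firefly(f, lo, hi, n=25, gens=100, beta0=1.0, gamma=1.0,
            alpha=0.2, seed=0):
    """Standard firefly algorithm: dimmer fireflies move toward brighter
    ones with attractiveness beta0*exp(-gamma*r^2) plus a small random
    walk; brightness here is -f (minimization)."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    X = rng.uniform(lo, hi, (n, dim))
    I = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:                    # firefly j is brighter
                    r2 = float(np.sum((X[i] - X[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += (beta * (X[j] - X[i])
                             + alpha * (rng.random(dim) - 0.5))
                    X[i] = np.clip(X[i], lo, hi)
                    I[i] = f(X[i])
    best = int(np.argmin(I))
    return X[best], I[best]

# Usage: minimize the 2-D sphere function.
best_x, best_f = firefly(lambda v: float(np.sum(v**2)),
                         np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```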

  15. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
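
    A toy example of the inverse-optimization idea (not the authors' method): with quadratic additive costs sum_i w_i x_i^2 and the linear constraint sum_i x_i = F, the minimizer satisfies x_i proportional to 1/w_i, so observed sharing patterns determine the weights only up to a common scale, illustrating the kind of ambiguity such analyses must contend with:

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([1.0, 2.0, 4.0, 8.0])   # hypothetical "elemental" costs

def forward(F, w):
    """Analytic minimizer of sum w_i x_i^2 subject to sum x_i = F."""
    inv = 1.0 / w
    return F * inv / inv.sum()

# Observe noisy force sharing at several total-force levels...
F_levels = np.linspace(5, 40, 8)
X = np.array([forward(F, w_true) + 0.05 * rng.standard_normal(4)
              for F in F_levels])

# ...then recover the weights (up to scale) from the mean sharing pattern.
shares = (X / X.sum(axis=1, keepdims=True)).mean(axis=0)
w_est = 1.0 / shares
w_est /= w_est[0]                          # fix the free scale: w_1 = 1
print(w_est)                               # ~[1, 2, 4, 8]
```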

  16. Is Perceptual Narrowing Too Narrow?

    ERIC Educational Resources Information Center

    Cashon, Cara H.; Denicola, Christopher A.

    2011-01-01

    There is a growing list of examples illustrating that infants are transitioning from having earlier abilities that appear more "universal," "broadly tuned," or "unconstrained" to having later abilities that appear more "specialized," "narrowly tuned," or "constrained." Perceptual narrowing, a well-known phenomenon related to face, speech, and…

  17. Towards Mapping the Provision of Ecosystem Services from Headwater Wetlands in the Susquehanna River Basin

    EPA Science Inventory

    Headwater wetlands provide a range of ecosystem services including habitat provisioning and flood retention. Following the River Ecosystem Synthesis framework we identified and assessed not only headwater wetlands, but unconstrained reaches with the potential to support diverse s...

  18. The Shock and Vibration Digest. Volume 13, Number 11

    DTIC Science & Technology

    1981-11-01

    [Scanned digest entry; recoverable content: "Beams with Unconstrained Damping Treatment," G.R. Bhashyam and G. Prathap; S. Narayanan, J.P. Verma, and A.K. Mallik, Dept. of Aerospace and Mech. Engrg. The remainder of the entry is author-index residue from the scan.]

  19. Compensation for Unconstrained Catheter Shaft Motion in Cardiac Catheters

    PubMed Central

    Degirmenci, Alperen; Loschak, Paul M.; Tschabrunn, Cory M.; Anter, Elad; Howe, Robert D.

    2016-01-01

    Cardiac catheterization with ultrasound (US) imaging catheters provides real time US imaging from within the heart, but manually navigating a four degree of freedom (DOF) imaging catheter is difficult and requires extensive training. Existing work has demonstrated robotic catheter steering in constrained bench top environments. Closed-loop control in an unconstrained setting, such as patient vasculature, remains a significant challenge due to friction, backlash, and physiological disturbances. In this paper we present a new method for closed-loop control of the catheter tip that can accurately and robustly steer 4-DOF cardiac catheters and other flexible manipulators despite these effects. The performance of the system is demonstrated in a vasculature phantom and an in vivo porcine animal model. During bench top studies the robotic system converged to the desired US imager pose with sub-millimeter and sub-degree-level accuracy. During animal trials the system achieved 2.0 mm and 0.65° accuracy. Accurate and robust robotic navigation of flexible manipulators will enable enhanced visualization and treatment during procedures. PMID:27525170
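
    The closed-loop steering idea can be sketched generically as damped-least-squares servoing of the imager pose; this is a standard robotics technique shown for flavor, not the paper's controller, which additionally compensates friction, backlash, and physiological disturbances (the Jacobian model is assumed supplied):

```python
import numpy as np

def servo_step(q, pose_error, jacobian, gain=0.5, damp=1e-3):
    """One damped-least-squares step for a 4-DOF catheter-like manipulator:
    update handle inputs q to shrink the measured tip pose error."""
    J = jacobian(q)                       # e.g. 6x4 pose Jacobian at q
    n = J.shape[0]
    dq = J.T @ np.linalg.solve(J @ J.T + damp * np.eye(n),
                               gain * pose_error)
    return q + dq
```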

  20. Unconstrained Respiration Measurement and Respiratory Arrest Detection Method by Dynamic Threshold in Transferring Patients by Stretchers

    NASA Astrophysics Data System (ADS)

    Kurihara, Yosuke; Watanabe, Kajiro; Kobayashi, Kazuyuki; Tanaka, Hiroshi

    General anesthesia used for surgical operations may leave patients in unstable condition afterwards, which can lead to respiratory arrest. Under such circumstances, nurses may fail to notice the change in the patient's condition, and other lapses in care can also occur. Such failures are especially likely while a patient is being transferred from the ICU to a room on a stretcher, since monitoring the blood oxygen saturation and other vital signs to detect a respiratory arrest is difficult during transfer. Here we present a noise reduction system and an algorithm to detect respiratory arrest during patient transfer, based on the unconstrained air-pressure method that the authors presented previously. As a result, when the acceleration level of the stretcher noise was 0.5 G, the respiratory arrest detection ratio of the novel method was 65%, while that of the conventional method was 0%.
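
    A minimal sketch of a dynamic-threshold detector in the spirit of this abstract (the paper's exact rule and noise-reduction stages are not given here; the band edges, window lengths, and ratio are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def arrest_flags(x, fs, win_s=10.0, hist_s=60.0, ratio=0.3):
    """Flag samples whose respiration-band envelope drops well below its
    own recent history (a dynamic threshold), suggesting apnea/arrest."""
    b, a = butter(2, [0.1 / (fs / 2), 0.5 / (fs / 2)], btype="band")
    resp = filtfilt(b, a, np.asarray(x, dtype=float))  # ~0.1-0.5 Hz band
    n, m = int(win_s * fs), int(hist_s * fs)
    env = np.sqrt(np.convolve(resp**2, np.ones(n) / n, mode="same"))
    flags = np.zeros(len(env), dtype=bool)
    for i in range(m, len(env)):
        thr = ratio * np.median(env[i - m:i])   # threshold tracks history
        flags[i] = env[i] < thr
    return flags  # an alarm would require the flag to persist for ~10 s
```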
