Sample records for unconstrained optimization software

  1. Modeling Latent Interactions at Level 2 in Multilevel Structural Equation Models: An Evaluation of Mean-Centered and Residual-Centered Unconstrained Approaches

    ERIC Educational Resources Information Center

    Leite, Walter L.; Zuo, Youzhen

    2011-01-01

    Among the many methods currently available for estimating latent variable interactions, the unconstrained approach is attractive to applied researchers because of its relatively easy implementation with any structural equation modeling (SEM) software. Using a Monte Carlo simulation study, we extended and evaluated the unconstrained approach to…

  2. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem with more than 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
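The external penalty idea described above — converting a constrained problem into a sequence of unconstrained problems with a growing penalty weight — can be sketched in a few lines. This is a generic illustration with a toy inner solver; all names and parameter values are stand-ins, not the BIGDOT implementation.

```python
# Exterior (quadratic) penalty method: converts  min f(x) s.t. g_i(x) <= 0
# into a sequence of unconstrained problems
#   min f(x) + r * sum_i max(0, g_i(x))^2
# with a growing penalty weight r. Illustrative sketch only.

def penalized(f, gs, r):
    """Unconstrained objective for penalty weight r."""
    return lambda x: f(x) + r * sum(max(0.0, g(x)) ** 2 for g in gs)

def inner_minimize(phi, x0, iters=200, h=1e-6):
    """Crude inner solver: finite-difference gradient + Armijo backtracking."""
    x = list(x0)
    for _ in range(iters):
        fx = phi(x)
        grad = []
        for i in range(len(x)):
            xh = list(x)
            xh[i] += h
            grad.append((phi(xh) - fx) / h)
        t = 1.0
        while True:  # backtracking line search
            xn = [xi - t * gi for xi, gi in zip(x, grad)]
            if phi(xn) <= fx - 0.5 * t * sum(gi * gi for gi in grad) or t < 1e-12:
                break
            t *= 0.5
        x = xn
    return x

def exterior_penalty(f, gs, x0, r0=1.0, growth=10.0, outer=5):
    x, r = list(x0), r0
    for _ in range(outer):
        x = inner_minimize(penalized(f, gs, r), x)
        r *= growth  # tighten the penalty each outer cycle
    return x

# Example: minimize (x - 2)^2 subject to x <= 1; the solution x = 1 is
# approached from the infeasible side as r grows.
f = lambda x: (x[0] - 2.0) ** 2
g = lambda x: x[0] - 1.0  # g(x) <= 0 encodes x <= 1
x_star = exterior_penalty(f, [g], [0.0])
```

Because the penalty is only active outside the feasible region, iterates approach the constrained optimum from the infeasible side, which is why exterior penalty codes can be started from an infeasible design.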

  3. An improved marriage in honey bees optimization algorithm for single objective unconstrained optimization.

    PubMed

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees; it is a form of swarm intelligence. In this study we propose an improved marriage in honey bees optimization (IMBO) that adds a Levy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance is tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
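A Levy flight produces heavy-tailed random steps, so a search mostly moves locally but occasionally makes a large exploratory jump. A common way to draw such steps is Mantegna's algorithm, sketched below; this is a generic illustration, not the IMBO paper's exact operator.

```python
# Levy-flight step generator via Mantegna's algorithm: step lengths follow a
# heavy-tailed distribution with index beta. Generic illustration only.
import math
import random

def levy_step(beta=1.5):
    num = math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
    den = math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0)
    sigma_u = (num / den) ** (1.0 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)

# A mating-flight-style move: perturb the current position with scaled steps.
random.seed(0)
position = 0.0
path = []
for _ in range(100):
    position += 0.1 * levy_step()
    path.append(position)
```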

  4. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  5. An Improved Marriage in Honey Bees Optimization Algorithm for Single Objective Unconstrained Optimization

    PubMed Central

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees; it is a form of swarm intelligence. In this study we propose an improved marriage in honey bees optimization (IMBO) that adds a Levy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance is tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416

  6. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated as constrained optimization problems, and several solution algorithms exist for solving them. One approach is to convert a constrained problem into a series of unconstrained problems; unconstrained solution algorithms can then be used as components of the constrained solution algorithms. Structural optimization is an iterative process: starting from an initial design, a finite element structural analysis is performed to calculate the response of the system (displacements, stresses, eigenvalues, etc.). Based upon the sensitivity of the objective and constraint functions, an optimizer such as ADS or IDESIGN can then be used to find a new, improved design. In the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, and dynamic analysis alike. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, a new structural analysis-synthesis code is needed which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2, and Cray-YMP. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into a widely used finite-element production code such as SAP-4. In addition, several nonlinear unconstrained optimization subroutines have been developed and tested in a parallel computing environment. The unconstrained optimization subroutines are not only useful in their own right, but can also be incorporated into a constrained optimization code such as ADS.

  7. A modified conjugate gradient coefficient with inexact line search for unconstrained optimization

    NASA Astrophysics Data System (ADS)

    Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa

    2016-11-01

    The conjugate gradient (CG) method is a line search algorithm widely known for its application in solving unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization problems. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison with some previous CG methods; the results obtained indicate that the new method is superior.
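The skeleton such CG methods share — a direction d updated as -g + beta*d and a step length from an inexact line search — can be sketched as follows. The Polak-Ribiere+ coefficient, Armijo backtracking, and restart schedule below are common stand-ins, not the AMR*/CD hybrid proposed in the paper.

```python
# Nonlinear conjugate gradient skeleton: d_{k+1} = -g_{k+1} + beta * d_k,
# with an inexact (Armijo backtracking) line search. Polak-Ribiere+ beta and
# periodic restarts are generic choices, not the paper's coefficient.

def cg_minimize(f, grad, x0, iters=500, restart=5):
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for k in range(1, iters + 1):
        gTd = sum(gi * di for gi, di in zip(g, d))
        if gTd >= 0.0:  # safeguard: fall back to steepest descent
            d = [-gi for gi in g]
            gTd = -sum(gi * gi for gi in g)
        t, fx = 1.0, f(x)
        while (f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gTd
               and t > 1e-12):
            t *= 0.5  # Armijo backtracking: an inexact line search
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        if k % restart == 0:
            beta = 0.0  # periodic restart to the steepest-descent direction
        else:  # Polak-Ribiere+ coefficient
            beta = max(0.0, sum(gn * (gn - gi) for gn, gi in zip(g_new, g))
                       / max(sum(gi * gi for gi in g), 1e-30))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Example: an ill-conditioned quadratic with its minimum at the origin.
f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: [2.0 * x[0], 20.0 * x[1]]
x_star = cg_minimize(f, grad, [3.0, 2.0])
```

The low memory footprint the abstract mentions is visible here: only the current point, gradient, and direction are stored, with no matrices.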

  8. A general-purpose optimization program for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Sugimoto, H.

    1986-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.

  9. An indirect method for numerical optimization using the Kreisselmeir-Steinhauser function

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    1989-01-01

    A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to the goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints, and an envelope of the entire function set is computed using the Kreisselmeier-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a sequential unconstrained minimization technique (SUMT) algorithm. Advantages of this approach are that unconstrained optimization methods can be used to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from either the feasible or the infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize for each individual objective function separately.
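The Kreisselmeier-Steinhauser envelope has a simple closed form, KS(g) = (1/rho) * ln(sum_i exp(rho * g_i)), a smooth conservative approximation of max_i g_i. A minimal generic sketch (not the paper's code), using the overflow-safe shifted form:

```python
# Kreisselmeier-Steinhauser (KS) envelope: a smooth, conservative surrogate
# for max_i g_i, letting one differentiable function stand in for a whole set
# of constraints/objectives. Minimal generic sketch.
import math

def ks(values, rho=50.0):
    m = max(values)  # shift by the max to avoid exp overflow
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

# Envelope property: max(g) <= KS(g) <= max(g) + ln(n)/rho, so the surrogate
# tightens as rho grows.
g = [0.3, -1.2, 0.1]
env = ks(g)
```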

  10. JWST Wavefront Control Toolbox

    NASA Technical Reports Server (NTRS)

    Shin, Shahram Ron; Aronstein, David L.

    2011-01-01

    A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces to correct for possible misalignments of the James Webb Space Telescope (JWST) using influence functions. The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing a cost function, and it can be used for either constrained or unconstrained optimization. The control process involves 1 to 7 degrees of freedom of perturbation per segment of the primary mirror, in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed using a "wrapper" approach, that handle the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints on the moving parts of the mirrors.

  11. Accuracy in breast shape alignment with 3D surface fitting algorithms.

    PubMed

    Riboldi, Marco; Gierga, David P; Chen, George T Y; Baroni, Guido

    2009-04-01

    Surface imaging is in use in radiotherapy clinical practice for patient setup optimization and monitoring. Breast alignment is accomplished by searching for a tentative spatial correspondence between the reference and daily surface shape models. In this study, the authors quantify whole breast shape alignment by relying on texture features digitized on 3D surface models. Texture feature localization was validated through repeated measurements in a silicone breast phantom mounted on a high-precision mechanical stage. Clinical investigations of breast shape alignment included 133 fractions in 18 patients treated with accelerated partial breast irradiation. The breast shape was detected with a 3D video-based surface imaging system, with compensation for breathing. An in-house algorithm for breast alignment, based on surface fitting constrained by nipple matching (constrained surface fitting), was applied. Results were compared with commercial software in which no constraints are utilized (unconstrained surface fitting). Texture feature localization was validated within 2 mm in each anatomical direction. Clinical data show that unconstrained surface fitting achieves adequate accuracy in most cases, though nipple mismatch is considerably higher than residual surface distances (3.9 mm vs 0.6 mm on average). Outliers beyond 1 cm can occur as the result of a degenerate surface fit, where unconstrained surface fitting is not sufficient to establish spatial correspondence. In the constrained surface fitting algorithm, average surface mismatch within 1 mm was obtained when the nipple position was forced to match in the [1.5; 5] mm range. In conclusion, optimal results can be obtained by trading off the desired overall surface congruence against the matching of selected landmarks (the constraint). Constrained surface fitting is put forward as an improvement in setup accuracy for those applications where whole breast positional reproducibility is an issue.

  12. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    A review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms is the family of conjugate-gradient methods, in several variations. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible-directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  13. A three-term conjugate gradient method under the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa

    2017-08-01

    Recently, numerous studies have been devoted to conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method is proposed for unconstrained optimization that always satisfies the sufficient descent condition; it is named the Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL) method. Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong Wolfe line search. Finally, numerical results are provided for the purpose of comparison.

  14. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The conjugate gradient (CG) method is an important technique in unconstrained optimization, due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Moreover, computational results show that the proposed method outperforms other existing CG methods.

  15. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods have been recognized as an interesting technique for solving optimization problems, due to their numerical efficiency, simplicity, and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5 (2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that the proposed method is efficient on the given standard test problems compared with other existing CG methods.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yousong, E-mail: yousong.luo@rmit.edu.au

    This paper deals with a class of optimal control problems governed by an initial-boundary value problem of a parabolic equation. The case of semi-linear boundary control is studied where the control is applied to the system via the Wentzell boundary condition. The differentiability of the state variable with respect to the control is established and hence a necessary condition is derived for the optimal solution in the case of both unconstrained and constrained problems. The condition is also sufficient for the unconstrained convex problems. A second order condition is also derived.

  17. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property in kernel methods for the Support Vector Machine (SVM), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix for indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, the suggested optimal λ for the projection method can be found using unconstrained optimization in kernel learning. In this paper we focus on determining the optimal λ precisely within an unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships, and we compared optimal λ determination under the Logdet divergence and the perturbed von-Neumann divergence, aiming to find a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ from the Logdet divergence achieves near-optimal performance, and that the perturbed von-Neumann divergence can help determine a relatively better projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way forward in kernel SVMs for varied objectives.

  18. A new nonlinear conjugate gradient coefficient under strong Wolfe-Powell line search

    NASA Astrophysics Data System (ADS)

    Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The nonlinear conjugate gradient (CG) method plays an important role in solving large-scale unconstrained optimization problems, and is widely used due to its simplicity. The method is known to possess the sufficient descent condition and global convergence properties. In this paper, a new nonlinear CG coefficient βk is presented, employing the strong Wolfe-Powell inexact line search. The performance of the new βk is tested in terms of number of iterations and central processing unit (CPU) time, using MATLAB software on an Intel Core i7-3470 CPU. Numerical experimental results show that the new βk converges rapidly compared with other classical CG methods.
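The strong Wolfe-Powell conditions referenced by these line searches are easy to state concretely: a step must give sufficient decrease and also flatten the directional derivative. A small checker in generic notation (the c1, c2 values are typical choices, not this paper's):

```python
# Strong Wolfe-Powell conditions for a step t along direction d:
#   f(x + t d) <= f(x) + c1 * t * g(x)'d      (sufficient decrease)
#   |g(x + t d)'d| <= c2 * |g(x)'d|           (strong curvature)
# with 0 < c1 < c2 < 1; c2 around 0.1 is a typical choice for CG methods.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def strong_wolfe(f, grad, x, d, t, c1=1e-4, c2=0.1):
    xn = [xi + t * di for xi, di in zip(x, d)]
    g0d = dot(grad(x), d)
    decrease = f(xn) <= f(x) + c1 * t * g0d
    curvature = abs(dot(grad(xn), d)) <= c2 * abs(g0d)
    return decrease and curvature

# Example: f(x) = x^2 from x = 1 along d = -1. The full step t = 1 lands on
# the minimizer and satisfies both conditions; a timid step t = 0.01 gives
# decrease but fails the curvature condition.
f = lambda x: x[0] ** 2
grad = lambda x: [2.0 * x[0]]
ok_full = strong_wolfe(f, grad, [1.0], [-1.0], 1.0)
ok_timid = strong_wolfe(f, grad, [1.0], [-1.0], 0.01)
```

The curvature condition is what rules out the overly short steps that the Armijo condition alone would accept.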

  19. Hybrid DFP-CG method for solving unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa

    2017-09-01

    The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as an approximation of the Hessian for this new hybrid algorithm. Numerical results show that the new algorithm performs better than the ordinary DFP method and is proven to possess both the sufficient descent and global convergence properties.
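For reference, the DFP update maintains an approximation H_k to the inverse Hessian; in standard notation (generic, not this paper's specific variant) it reads:

```latex
H_{k+1} = H_k + \frac{s_k s_k^{\top}}{s_k^{\top} y_k}
              - \frac{H_k\, y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k},
\qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
```

The quasi-Newton search direction is then d_k = -H_k ∇f(x_k); the hybrid of the abstract blends this with a CG-style direction.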

  20. Optimization of flexible wing structures subject to strength and induced drag constraints

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1977-01-01

    An optimization procedure for designing wing structures subject to stress, strain, and drag constraints is presented. The optimization method utilizes an extended penalty function formulation for converting the constrained problem into a series of unconstrained ones. Newton's method is used to solve the unconstrained problems. An iterative analysis procedure is used to obtain the displacements of the wing structure including the effects of load redistribution due to the flexibility of the structure. The induced drag is calculated from the lift distribution. Approximate expressions for the constraints used during major portions of the optimization process enhance the efficiency of the procedure. A typical fighter wing is used to demonstrate the procedure. Aluminum and composite material designs are obtained. The tradeoff between weight savings and drag reduction is investigated.
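Newton's method, used above for the unconstrained subproblems, iterates x ← x − f″(x)⁻¹ f′(x). A one-dimensional sketch with a stand-in convex objective (not the wing-design problem):

```python
# Newton's method for unconstrained minimization in one dimension:
#   x <- x - f'(x) / f''(x)
# applied to a strictly convex stand-in objective, f(x) = (x - 3)^2 + exp(x),
# so the stationary point found is the global minimum.
import math

def newton(df, d2f, x, iters=25):
    for _ in range(iters):
        x -= df(x) / d2f(x)
    return x

df = lambda x: 2.0 * (x - 3.0) + math.exp(x)   # f'(x)
d2f = lambda x: 2.0 + math.exp(x)              # f''(x) > 0 everywhere
x_star = newton(df, d2f, 0.0)
resid = abs(df(x_star))  # residual of the optimality condition f'(x*) = 0
```

Near the solution the iteration converges quadratically, which is why penalty-based procedures like the one above can afford to re-solve the unconstrained subproblem at each penalty level.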

  1. Trajectory optimization and guidance law development for national aerospace plane applications

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1988-01-01

    The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic-pressure-constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for the transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic-pressure-constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic-pressure-constrained case.

  2. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method; and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.

  3. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

    PubMed

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, a novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle that guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions, in terms of better global search capability and faster convergence rate.

  4. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    NASA Astrophysics Data System (ADS)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques prominently used for solving unconstrained optimization problems, due to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches in given conditions. Theoretically, proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used, and that it performs better under the inexact line search than under the exact line search.

  5. An overview of unconstrained free boundary problems

    PubMed Central

    Figalli, Alessio; Shahgholian, Henrik

    2015-01-01

    In this paper, we present a survey concerning unconstrained free boundary problems posed on the unit ball B1, where Ω is an unknown open set, F1 and F2 are elliptic operators (admitting regular solutions), and the admissible class is a function space to be specified in each case. Our main objective is to discuss a unifying approach to the optimal regularity of solutions to such matching problems, and to list several open problems in this direction. PMID:26261367

  6. Unconstrained paving and plastering method for generating finite element meshes

    DOEpatents

    Staten, Matthew L.; Owen, Steven J.; Blacker, Teddy D.; Kerr, Robert

    2010-03-02

    Computer software for and a method of generating a conformal all quadrilateral or hexahedral mesh comprising selecting an object with unmeshed boundaries and performing the following while unmeshed voids are larger than twice a desired element size and unrecognizable as either a midpoint subdividable or pave-and-sweepable polyhedra: selecting a front to advance; based on sizes of fronts and angles with adjacent fronts, determining which adjacent fronts should be advanced with the selected front; advancing the fronts; detecting proximities with other nearby fronts; resolving any found proximities; forming quadrilaterals or unconstrained columns of hexahedra where two layers cross; and establishing hexahedral elements where three layers cross.

  7. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and the convergence criteria. Using the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
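The Levenberg-Marquardt idea behind one of those engines can be sketched for a small nonlinear fit: damp the Gauss-Newton normal equations with λI, shrinking λ when a step helps and growing it when it does not. Everything below (the exponential model, the data, and the simple λ schedule) is illustrative, not MULTIVAR's routine.

```python
# Levenberg-Marquardt sketch for fitting y = a * exp(b * x) by minimizing the
# sum of squared residuals. The 2x2 damped normal equations are solved by
# Cramer's rule. Illustrative stand-in, not the MULTIVAR engine.
import math

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]  # synthetic, noise-free data

def residuals(a, b):
    return [a * math.exp(b * x) - y for x, y in zip(xs, ys)]

def sse(a, b):
    return sum(r * r for r in residuals(a, b))

def lm_fit(a=1.0, b=0.0, lam=1e-3, iters=100):
    for _ in range(iters):
        r = residuals(a, b)
        # Jacobian of the residuals with respect to (a, b)
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        # damped normal equations (J'J + lam*I) delta = -J'r
        jaa = sum(ja * ja for ja, _ in J) + lam
        jbb = sum(jb * jb for _, jb in J) + lam
        jab = sum(ja * jb for ja, jb in J)
        ga = sum(ja * ri for (ja, _), ri in zip(J, r))
        gb = sum(jb * ri for (_, jb), ri in zip(J, r))
        det = jaa * jbb - jab * jab
        da = (-ga * jbb + gb * jab) / det
        db = (-gb * jaa + ga * jab) / det
        if sse(a + da, b + db) < sse(a, b):
            a, b, lam = a + da, b + db, lam * 0.5  # accept; trust the model more
        else:
            lam *= 10.0                            # reject; add damping
    return a, b

a_fit, b_fit = lm_fit()
```

With large λ the step approaches a short gradient-descent step (robust far from the solution); with small λ it approaches the fast Gauss-Newton step, which is what makes the method a good default for least-squares fitting.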

  8. MO-FG-CAMPUS-TeP2-01: A Graph Form ADMM Algorithm for Constrained Quadratic Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X; Belcher, AH; Wiersma, R

    Purpose: In radiation therapy optimization the constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but do not need to be satisfied exactly. Currently the voxel dose constraints are viewed as soft constraints, included as part of the objective function, and approximated as an unconstrained problem. However, in some treatment planning cases the constraints should be specified as hard constraints and solved by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and to compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators were first constructed as applicable to quadratic IMRT constrained optimization, and the problem was formulated in the graph form of ADMM. A pre-iteration operation for the projection of a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by LBFGS, IPOPT, the Matlab built-in toolbox, CVX (implementing SeDuMi), and Mosek solvers. For unconstrained optimization, it was found that LBFGS performs the best, 3–5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8–100 times faster than the other solvers. Conclusion: Graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers and uses significantly less computer memory.
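
    The splitting behind such an ADMM can be illustrated on a toy box-constrained quadratic program (a minimal Python sketch, not the paper's IMRT formulation; the problem data and penalty parameter rho below are arbitrary):

```python
import numpy as np

def admm_box_qp(A, b, lo, hi, rho=1.0, iters=200):
    """Minimize 0.5 * ||A x - b||^2 subject to lo <= x <= hi.

    Splitting: f(x) = 0.5*||A x - b||^2 with the consensus constraint
    x = z, where z carries the box constraint.  The z-update is the
    proximal operator of the box indicator (a projection)."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u: scaled dual variable
    M = A.T @ A + rho * np.eye(n)                    # x-update system, fixed across iterations
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # x-update
        z = np.clip(x + u, lo, hi)                   # z-update (projection onto box)
        u = u + x - z                                # dual update
    return z

# toy problem: the unconstrained minimizer [2, -3] lies outside the box
A = np.eye(2)
b = np.array([2.0, -3.0])
x_opt = admm_box_qp(A, b, lo=-1.0, hi=1.0)
# x_opt -> [1, -1], the box-constrained minimizer
```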

  9. Number-unconstrained quantum sensing

    NASA Astrophysics Data System (ADS)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  10. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, because the unconstrained Kalman filter is already theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
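
    The blending idea can be caricatured in a few lines (a deliberately simplified sketch: the paper derives the confidence weight from measurement-residual statistics inside a full constrained Kalman filter, whereas here the "constraint" is a plain box clip and the weight is supplied by hand):

```python
import numpy as np

def constrain_estimate(x_hat, lo, hi, confidence):
    """Blend an unconstrained Kalman state estimate with its projection
    onto box constraints.  confidence in [0, 1]: 1 trusts the
    (theoretically optimal) unconstrained filter fully, 0 enforces the
    heuristic constraints fully.  Hypothetical helper for illustration."""
    x_proj = np.clip(x_hat, lo, hi)
    return confidence * x_hat + (1.0 - confidence) * x_proj

# an estimate violating the constraints, blended at half confidence
x = constrain_estimate(np.array([1.3, -0.2]), lo=0.0, hi=1.0, confidence=0.5)
# x -> [1.15, -0.1], halfway between the estimate and its projection
```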

  11. Optimal lifting ascent trajectories for the space shuttle

    NASA Technical Reports Server (NTRS)

    Rau, T. R.; Elliott, J. R.

    1972-01-01

    The performance gains which are possible through the use of optimal trajectories for a particular space shuttle configuration are discussed. The spacecraft configurations and aerodynamic characteristics are described. Shuttle mission payload capability is examined with respect to the optimal orbit inclination for unconstrained, constrained, and nonlifting conditions. The effects of velocity loss and heating rate on the optimal ascent trajectory are investigated.

  12. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second is proposed for solving nonlinear equations. The first method incorporates two kinds of information: function values and gradient values. Both methods possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
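
    A generic PRP+ iteration, in which the truncation βk = max(0, βPRP) enforces property (1), might look like the following sketch (illustrative only; the paper's two modified methods use different direction formulas and avoid the line search for their descent properties):

```python
import numpy as np

def prp_plus(f, grad, x0, iters=500, tol=1e-8):
    """Conjugate gradient with the PRP+ rule beta_k = max(0, beta_PRP),
    which guarantees beta_k >= 0, plus an Armijo backtracking line
    search.  A generic sketch, not the authors' modified methods."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # safeguard: restart with steepest descent
            d = -g
        t, fx, gTd = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * gTd and t > 1e-12:
            t *= 0.5                   # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# convex quadratic with minimizer [1, 2]
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] - 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] - 2.0)])
x_star = prp_plus(f, grad, np.zeros(2))
# x_star -> [1, 2]
```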


  14. Poster — Thur Eve — 69: Computational Study of DVH-guided Cancer Treatment Planning Optimization Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghomi, Pooyan Shirvani; Zinchenko, Yuriy

    2014-08-15

    Purpose: To compare methods of incorporating Dose Volume Histogram (DVH) curves into treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one tumor were involved in the treatment planning. The OARs and tumor were discretized into a total of 50,221 voxels. The number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches in replicating the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds a promise of substantial computational speedup.

  15. Application of Sequential Quadratic Programming to Minimize Smart Active Flap Rotor Hub Loads

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi; Leyland, Jane

    2014-01-01

    In an analytical study, SMART active flap rotor hub loads have been minimized using nonlinear programming constrained optimization methodology. The recently developed NLPQLP system (Schittkowski, 2010) that employs Sequential Quadratic Programming (SQP) as its core algorithm was embedded into a driver code (NLP10x10) specifically designed to minimize active flap rotor hub loads (Leyland, 2014). Three types of practical constraints on the flap deflections have been considered. To validate the current application, two other optimization methods have been used: i) the standard, linear unconstrained method, and ii) the nonlinear Generalized Reduced Gradient (GRG) method with constraints. The new software code NLP10x10 has been systematically checked out. It has been verified that NLP10x10 is functioning as desired. The following are briefly covered in this paper: relevant optimization theory; implementation of the capability of minimizing a metric of all, or a subset, of the hub loads as well as the capability of using all, or a subset, of the flap harmonics; and finally, solutions for the SMART rotor. The eventual goal is to implement NLP10x10 in a real-time wind tunnel environment.

  16. Monolithic, multi-bandgap, tandem, ultra-thin, strain-counterbalanced, photovoltaic energy converters with optimal subcell bandgaps

    DOEpatents

    Wanlass, Mark W [Golden, CO; Mascarenhas, Angelo [Lakewood, CO

    2012-05-08

    Modeling a monolithic, multi-bandgap, tandem, solar photovoltaic converter or thermophotovoltaic converter by constraining the bandgap value for the bottom subcell to no less than a particular value produces an optimum combination of subcell bandgaps that provides theoretical energy conversion efficiencies nearly as good as unconstrained maximum theoretical conversion efficiency models, but is more conducive to actual fabrication than unconstrained model optimum bandgap combinations. Achieving such constrained or unconstrained optimum bandgap combinations includes growth of a graded-layer transition from the larger lattice constant of the parent substrate to a smaller lattice constant, to accommodate higher-bandgap upper subcells, and of at least one graded layer that transitions back to a larger lattice constant, to accommodate lower-bandgap lower subcells and to counter-strain the epistructure to mitigate epistructure bowing.

  17. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest descent is known as the simplest gradient method. Much recent research has sought appropriate step-size procedures that reduce the objective function value progressively. In this paper, the properties of the steepest descent method reported in the literature are reviewed, together with the advantages and disadvantages of each step-size procedure. The development of the steepest descent method driven by its step-size procedure is discussed. In order to test the performance of each step size, we ran a steepest descent procedure in a C++ program. We applied it to an unconstrained optimization test problem with two variables, then compared the numerical results of each step-size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each problem case.
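
    The kind of comparison the paper performs can be reproduced in miniature (a Python sketch rather than C++; the two step-size procedures and the two-variable test function below are illustrative choices, not the paper's):

```python
import numpy as np

def steepest_descent(grad, x0, step, iters=2000, tol=1e-10):
    """x_{k+1} = x_k - alpha_k * grad(x_k); step(x, g) supplies alpha_k,
    so different step-size procedures can be swapped in."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step(x, g) * g
    return x

# two-variable test problem f(x) = x1^2 + 10*x2^2, minimum at the origin
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
fixed_step = lambda x, g: 0.05                                        # constant alpha
exact_step = lambda x, g: (g @ g) / (2.0 * g[0]**2 + 20.0 * g[1]**2)  # exact line search for this f
x_fixed = steepest_descent(grad, [1.0, 1.0], fixed_step)
x_exact = steepest_descent(grad, [1.0, 1.0], exact_step)
# both converge to the origin; the exact rule needs fewer iterations
```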

  18. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. The approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.

  19. Constrained growth flips the direction of optimal phenological responses among annual plants.

    PubMed

    Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas

    2016-03-01

    Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue, and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. They can help to explain phenological patterns along productivity gradients, and they link empirical observations made on calendar scales with life-history theory. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  20. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.

  1. Experimental and simulation studies of multivariable adaptive optimization of continuous bioreactors using bilevel forgetting factors.

    PubMed

    Chang, Y K; Lim, H C

    1989-08-20

    A multivariable on-line adaptive optimization algorithm using a bilevel forgetting factor method was developed and applied to a continuous baker's yeast culture in simulation and experimental studies to maximize the cellular productivity by manipulating the dilution rate and the temperature. The algorithm showed good optimization speed, adaptability, and reoptimization capability. It was able to stably maintain the process around the optimum point for an extended period of time. Two cases were investigated: an unconstrained and a constrained optimization. In the constrained optimization the ethanol concentration was used as an index for the baking quality of yeast cells. An equality constraint with a quadratic penalty was imposed on the ethanol concentration to keep its level close to a hypothetical "optimum" value. The developed algorithm was experimentally applied to a baker's yeast culture to demonstrate its validity. Only unconstrained optimization was carried out experimentally. A set of tuning parameter values was suggested after evaluating the results from several experimental runs. With those tuning parameter values the optimization took 50-90 h. At the attained steady state, the dilution rate was 0.310 h(-1), the temperature 32.8 degrees C, and the cellular productivity 1.50 g/L/h.

  2. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
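
    The generic prediction-correction template can be sketched on a scalar, unconstrained, time-varying problem (illustrative only: the paper treats constrained vector-valued problems and designs first-order predictors that avoid the Hessian inverse, while this toy version divides by a scalar Hessian directly):

```python
import numpy as np

def predict_correct(grad_x, grad_tx, hess, x0, t0, dt, n_steps, n_corr=3, alpha=0.5):
    """Track x*(t) = argmin_x f(x, t) at sample times t0 + k*dt.
    Prediction keeps grad f ~ 0 as t advances; correction runs a few
    gradient steps on f(., t + dt)."""
    x, t = x0, t0
    traj = []
    for _ in range(n_steps):
        x = x - dt * grad_tx(x, t) / hess(x, t)   # prediction step
        t += dt
        for _ in range(n_corr):                   # correction steps
            x = x - alpha * grad_x(x, t)
        traj.append(x)
    return np.array(traj)

# time-varying objective f(x, t) = 0.5*(x - sin t)^2, so x*(t) = sin t
grad_x = lambda x, t: x - np.sin(t)
grad_tx = lambda x, t: -np.cos(t)     # time derivative of grad_x
hess = lambda x, t: 1.0
traj = predict_correct(grad_x, grad_tx, hess, x0=0.0, t0=0.0, dt=0.1, n_steps=100)
# traj closely tracks sin(t) on the grid t = 0.1, 0.2, ..., 10.0
```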

  3. An expert system for choosing the best combination of options in a general purpose program for automated design synthesis

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Barthelemy, J.-F. M.

    1986-01-01

    An expert system called EXADS has been developed to aid users of the Automated Design Synthesis (ADS) general purpose optimization program. ADS has approximately 100 combinations of strategy, optimizer, and one-dimensional search options from which to choose. It is difficult for a nonexpert to make this choice. This expert system aids the user in choosing the best combination of options based on the users knowledge of the problem and the expert knowledge stored in the knowledge base. The knowledge base is divided into three categories; constrained problems, unconstrained problems, and constrained problems being treated as unconstrained problems. The inference engine and rules are written in LISP, contains about 200 rules, and executes on DEC-VAX (with Franz-LISP) and IBM PC (with IQ-LISP) computers.

  4. Distributed Learning, Extremum Seeking, and Model-Free Optimization for the Resilient Coordination of Multi-Agent Adversarial Groups

    DTIC Science & Technology

    2016-09-07

    been demonstrated on maximum power point tracking for photovoltaic arrays and for wind turbines. 3. ES has recently been implemented on the Mars...high-dimensional optimization problems. Extensions and applications of these techniques were developed during the realization of the project. 15...studied problems of dynamic average consensus and a class of unconstrained continuous-time optimization algorithms for the coordination of multiple

  5. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution for predicting various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, which is usually uncertain, noisy, or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
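
    Nelder-Mead's appeal here is that it needs only function values, no derivatives. A compact textbook implementation of the simplex search (reflect/expand/contract/shrink steps; a generic sketch, not the authors' fiber model) is:

```python
import numpy as np

def nelder_mead(f, x0, iters=500, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Textbook Nelder-Mead simplex search: uses only function values,
    so it tolerates noisy or discontinuous objectives."""
    x0 = np.asarray(x0, float)
    simplex = [x0]
    for i in range(len(x0)):                 # initial simplex around x0
        p = x0.copy()
        p[i] += 0.1 if p[i] == 0 else 0.05 * p[i]
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + alpha * (centroid - worst)       # reflection
        if f(xr) < f(best):
            xe = centroid + gamma * (centroid - worst)   # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = centroid + rho * (worst - centroid)     # contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                        # shrink toward best
                simplex = [best + sigma * (p - best) for p in simplex]
    return min(simplex, key=f)

# smooth test objective with minimizer [1, -2]
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_min = nelder_mead(f, [0.0, 0.0])
# x_min -> approximately [1, -2]
```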

  6. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem for nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a certain performance index and are only suitable for linear or affine nonlinear systems, which is restrictive in practice. To overcome this limitation, a system transformation is first introduced with a general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on a gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.

  7. Gaussian Accelerated Molecular Dynamics: Theory, Implementation, and Applications

    PubMed Central

    Miao, Yinglong; McCammon, J. Andrew

    2018-01-01

    A novel Gaussian Accelerated Molecular Dynamics (GaMD) method has been developed for simultaneous unconstrained enhanced sampling and free energy calculation of biomolecules. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of the biomolecules. Furthermore, by constructing a boost potential that follows a Gaussian distribution, accurate reweighting of GaMD simulations is achieved via cumulant expansion to the second order. The free energy profiles obtained from GaMD simulations allow us to identify distinct low energy states of the biomolecules and characterize biomolecular structural dynamics quantitatively. In this chapter, we present the theory of GaMD, its implementation in the widely used molecular dynamics software packages (AMBER and NAMD), and applications to the alanine dipeptide biomolecular model system, protein folding, biomolecular large-scale conformational transitions and biomolecular recognition. PMID:29720925

  8. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L(sub 1). By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L(sub 2) norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L(sub 2) minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L(sub 2) differ drastically from those obtained in the sense of L(sub 1).
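
    For a discretized problem, the unconstrained L2-optimal control is simply the minimum-norm solution of the cancellation equations, which `numpy.linalg.lstsq` returns for underdetermined systems (the transfer matrix G and field samples p below are random stand-ins, not the acoustics model):

```python
import numpy as np

# Discrete anti-sound sketch: choose control source strengths q so the
# controls cancel the unwanted field p at the field points, G q = -p,
# while minimizing ||q||_2.  For an underdetermined full-row-rank
# system, lstsq returns exactly this minimum-norm exact solution.
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 6))    # hypothetical transfer matrix (6 sources, 3 field points)
p = rng.standard_normal(3)         # hypothetical unwanted field samples
q, *_ = np.linalg.lstsq(G, -p, rcond=None)
# q satisfies G q = -p with minimal L2 norm
```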

  9. Atlas-Independent, Electrophysiological Mapping of the Optimal Locus of Subthalamic Deep Brain Stimulation for the Motor Symptoms of Parkinson Disease.

    PubMed

    Conrad, Erin C; Mossner, James M; Chou, Kelvin L; Patil, Parag G

    2018-05-23

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) improves motor symptoms of Parkinson disease (PD). However, motor outcomes can be variable, perhaps due to inconsistent positioning of the active contact relative to an unknown optimal locus of stimulation. Here, we determine the optimal locus of STN stimulation in a geometrically unconstrained, mathematically precise, and atlas-independent manner, using Unified Parkinson Disease Rating Scale (UPDRS) motor outcomes and an electrophysiological neuronal stimulation model. In 20 patients with PD, we mapped motor improvement to active electrode location, relative to the individual, directly MRI-visualized STN. Our analysis included a novel, unconstrained, computational electrical-field model of neuronal activation to estimate the optimal locus of DBS. We mapped the optimal locus to a tightly defined ovoid region 0.49 mm lateral, 0.88 mm posterior, and 2.63 mm dorsal to the anatomical midpoint of the STN. On average, this locus is 11.75 mm lateral, 1.84 mm posterior, and 1.08 mm ventral to the mid-commissural point. Our novel, atlas-independent method reveals a single, ovoid optimal locus of stimulation in STN DBS for PD. The methodology, here applied to the UPDRS and PD, is generalizable to atlas-independent mapping of other motor and non-motor effects of DBS. © 2018 S. Karger AG, Basel.

  10. Dai-Kou type conjugate gradient methods with a line search only using gradient.

    PubMed

    Huang, Yuanyuan; Liu, Changhe

    2017-01-01

    In this paper, Dai-Kou type conjugate gradient methods are developed to solve the optimality condition of an unconstrained optimization problem; they utilize only gradient information and have a broader application scope. Under suitable conditions, the developed methods are globally convergent. Numerical tests and comparisons with the PRP+ conjugate gradient method, which likewise uses only gradients, show that the methods are efficient.

  11. Extremal Optimization for Quadratic Unconstrained Binary Problems

    NASA Astrophysics Data System (ADS)

    Boettcher, S.

    We present an implementation of τ-EO for quadratic unconstrained binary optimization (QUBO) problems. To this end, we transform QUBO from its conventional Boolean presentation into a spin glass with a random external field on each site. These fields tend to be rather large compared to the typical coupling, presenting EO with a challenging two-scale problem: exploring smaller differences in couplings effectively while sufficiently aligning with those strong external fields. However, we also find a simple solution to that problem, which indicates that those external fields tilt the energy landscape to such a degree that global minima become easier to find than those of spin glasses with no (or very small) fields. We explore the impact of the weight distributions of QUBO formulations in the operations research literature and analyze their meaning in a spin-glass language. This is significant because QUBO problems are considered among the main contenders for NP-hard problems that could be solved efficiently on a quantum computer such as D-Wave.
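
    The Boolean-to-spin transformation described here is mechanical: substituting x_i = (1 + s_i)/2 into the QUBO objective yields pairwise Ising couplings plus the local external fields. A sketch under the convention that the QUBO objective is x^T Q x:

```python
import numpy as np

def qubo_to_ising(Q):
    """Rewrite the QUBO objective x^T Q x (x_i in {0,1}) as an Ising
    energy  s^T J s + h . s + offset  with s_i in {-1,+1}, J strictly
    upper triangular, via x_i = (1 + s_i)/2.  The local fields h_i are
    the 'external field on each site' the abstract refers to."""
    Q = np.asarray(Q, float)
    h = (Q.sum(axis=1) + Q.sum(axis=0)) / 4.0   # local fields
    J = np.triu(Q + Q.T, k=1) / 4.0             # pairwise couplings (i < j)
    offset = (Q.sum() + np.trace(Q)) / 4.0      # constant shift
    return J, h, offset

def ising_energy(J, h, offset, s):
    s = np.asarray(s, float)
    return s @ J @ s + h @ s + offset

# check: both forms agree on every Boolean assignment
Q = np.array([[1.0, -2.0], [0.0, 3.0]])
J, h, off = qubo_to_ising(Q)
for xb in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    xv = np.array(xb, float)
    assert abs(xv @ Q @ xv - ising_energy(J, h, off, 2 * xv - 1)) < 1e-12
```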


  13. Experiential effects on mirror systems and social learning: implications for social intelligence.

    PubMed

    Reader, Simon M

    2014-04-01

    Investigations of biases and experiential effects on social learning, social information use, and mirror systems can usefully inform one another. Unconstrained learning is predicted to shape mirror systems when the optimal response to an observed act varies, but constraints may emerge when immediate error-free responses are required and evolutionary or developmental history reliably predicts the optimal response. Given the power of associative learning, such constraints may be rare.

  14. A Comparison of Approaches for Solving Hard Graph-Theoretic Problems

    DTIC Science & Technology

    2015-04-29

    can be converted to a quadratic unconstrained binary optimization (QUBO) problem that uses 0/1-valued variables, and so they are often used...

  15. The use of optimization techniques to design controlled diffusion compressor blading

    NASA Technical Reports Server (NTRS)

    Sanger, N. L.

    1982-01-01

    A method for automating compressor blade design using numerical optimization, applied to the design of a controlled-diffusion stator blade row, is presented. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs, which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of the design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.

  16. Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables

    NASA Astrophysics Data System (ADS)

    Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.

    2018-02-01

    In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem, the unconstrained maximization of a piecewise-quadratic function, is solved by Newton's method. The unconstrained optimization problem dual to the regularized problem of finding the projection onto the solution set of the system is also considered. A connection between duality theory, Newton's method, and some known algorithms for projecting onto the standard simplex is shown. Using the constraints of the transport linear programming problem as an example, it is demonstrated that exploiting their structure can make computing the generalized Hessian matrix more efficient. Some examples of numerical calculations using MATLAB are presented.
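    The abstract mentions known algorithms for projecting onto a standard simplex. One textbook method (not necessarily the one the authors use) is the sort-and-threshold rule, sketched here:

```python
def project_to_simplex(y):
    """Euclidean projection of y onto {x : x >= 0, sum(x) = 1}
    via the classic sort-and-threshold rule."""
    n = len(y)
    u = sorted(y, reverse=True)
    css = 0.0           # running cumulative sum of the sorted values
    theta = 0.0         # threshold subtracted from every coordinate
    for i in range(n):
        css += u[i]
        t = (css - 1.0) / (i + 1)
        if u[i] - t > 0:    # keep the largest index where this holds
            theta = t
    return [max(yi - theta, 0.0) for yi in y]

p = project_to_simplex([0.5, 0.2, 0.9])
```

    The threshold theta is chosen so the surviving coordinates sum to one; coordinates below it are clipped to zero.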

  17. Conceptual/preliminary design study of subsonic V/STOL and STOVL aircraft derivatives of the S-3A

    NASA Technical Reports Server (NTRS)

    Kidwell, G. H., Jr.

    1981-01-01

    A computerized aircraft synthesis program was used to examine the feasibility and capability of a V/STOL aircraft based on the Navy S-3A aircraft. Two major airframe modifications are considered: replacement of the wing, and substitution of deflected thrust turbofan engines similar to the Pegasus engine. Three planform configurations for the all composite wing were investigated: an unconstrained span design, a design with the span constrained to 64 feet, and an unconstrained span oblique wing design. Each design was optimized using the same design variables, and performance and control analyses were performed. The oblique wing configuration was found to have the greatest potential in this application. The mission performance of these V/STOL aircraft compares favorably with that of the CTOL S-3A.

  18. First-order convex feasibility algorithms for x-ray CT

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization problem of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization problem called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms, which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT applications. PMID:23464295
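    As a much simpler relative of the convex feasibility formulations discussed above, alternating projections (POCS) find a point in the intersection of two convex sets. The two sets below are toy stand-ins, not the CT constraints from the paper:

```python
def project_line(p):
    """Project onto the line x + y = 1 (a stand-in convex set).
    For a.x = b with a = (1, 1): p - ((a.p - b)/||a||^2) a."""
    r = (p[0] + p[1] - 1.0) / 2.0
    return (p[0] - r, p[1] - r)

def project_orthant(p):
    """Project onto the nonnegative orthant, another convex set."""
    return (max(p[0], 0.0), max(p[1], 0.0))

def pocs(p, iters=100):
    """Alternating projections onto the two sets (POCS)."""
    for _ in range(iters):
        p = project_orthant(project_line(p))
    return p

sol = pocs((-2.0, 3.0))   # converges to the intersection point (0, 1)
```

    Chambolle-Pock-type methods handle many sets and data-fidelity terms at once, but the convex feasibility goal is the same: land in the intersection.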

  19. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method, based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and with parameters appearing on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
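    The second-order update the abstract refers to can be illustrated, for a single unconstrained parameter, by the basic Newton iteration. The objective here is an assumed toy function, not one from the paper:

```python
def newton_min(df, d2f, p, iters=20):
    """Newton's method for a scalar parameter: p <- p - f'(p)/f''(p)."""
    for _ in range(iters):
        p -= df(p) / d2f(p)
    return p

# Toy objective f(p) = p - log(p), minimized at p = 1 (assumed example).
df = lambda p: 1.0 - 1.0 / p      # first derivative
d2f = lambda p: 1.0 / (p * p)     # second derivative
p_star = newton_min(df, d2f, 0.5)
```

    The second-order information buys quadratic convergence near the optimum, which is why the paper reports it as the more accurate refinement step after conjugate gradients.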

  20. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory that allows bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.

  1. Electric train energy consumption modeling

    DOE PAGES

    Wang, Jinghui; Rakha, Hesham A.

    2017-05-01

    For this paper we develop an electric train energy consumption modeling framework considering instantaneous regenerative braking efficiency in support of a rail simulation system. The model is calibrated with data from Portland, Oregon using an unconstrained non-linear optimization procedure, and validated using data from Chicago, Illinois by comparing model predictions against the National Transit Database (NTD) estimates. The results demonstrate that regenerative braking efficiency varies as an exponential function of the deceleration level, rather than an average constant as assumed in previous studies. The model predictions are demonstrated to be consistent with the NTD estimates, producing predicted errors of 1.87% and -2.31%. The paper demonstrates that energy recovery reduces the overall power consumption by 20% for the tested Chicago route. Furthermore, the paper demonstrates that the proposed modeling approach is able to capture energy consumption differences associated with train, route and operational parameters, and thus is applicable for project-level analysis. The model can be easily implemented in traffic simulation software, used in smartphone applications and eco-transit programs given its fast execution time and easy integration in complex frameworks.

  2. Electric train energy consumption modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jinghui; Rakha, Hesham A.

    For this paper we develop an electric train energy consumption modeling framework considering instantaneous regenerative braking efficiency in support of a rail simulation system. The model is calibrated with data from Portland, Oregon using an unconstrained non-linear optimization procedure, and validated using data from Chicago, Illinois by comparing model predictions against the National Transit Database (NTD) estimates. The results demonstrate that regenerative braking efficiency varies as an exponential function of the deceleration level, rather than an average constant as assumed in previous studies. The model predictions are demonstrated to be consistent with the NTD estimates, producing predicted errors of 1.87% and -2.31%. The paper demonstrates that energy recovery reduces the overall power consumption by 20% for the tested Chicago route. Furthermore, the paper demonstrates that the proposed modeling approach is able to capture energy consumption differences associated with train, route and operational parameters, and thus is applicable for project-level analysis. The model can be easily implemented in traffic simulation software, used in smartphone applications and eco-transit programs given its fast execution time and easy integration in complex frameworks.

  3. A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Ortiz, Francisco

    2004-01-01

    COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms, among them the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP), and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation, and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, gradually approaching optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. A genetic algorithm is not guaranteed to find the global optimum, but it is less likely than traditional gradient-based search methods to get trapped at a local optimum when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm into COMETBOARDS, which casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some of these penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach, which is then compared with the existing penalty functions.
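    The penalty-function conversion described in this record can be sketched on a hypothetical one-variable problem (not one from the study): as the penalty weight grows, the unconstrained minimizer approaches the constrained one.

```python
def penalized(x, mu):
    """Quadratic exterior penalty for: minimize (x-2)^2 subject to x <= 1.
    (Toy problem chosen for illustration, not from the study.)"""
    violation = max(0.0, x - 1.0)
    return (x - 2.0) ** 2 + mu * violation ** 2

def argmin_on_grid(f, lo, hi, steps=200000):
    """Crude unconstrained minimization by grid search (gradient-free,
    standing in for the GA's role in the study)."""
    best_x, best_v = lo, f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

# Unconstrained minimizers for growing penalty weights approach x = 1.
xs = [argmin_on_grid(lambda x: penalized(x, mu), -1.0, 3.0)
      for mu in (1.0, 10.0, 100.0)]
```

    Analytically the penalized minimizer is (2 + mu)/(1 + mu), which tends to the constrained optimum x = 1 as mu grows, illustrating both the appeal and the ill-conditioning of large penalty weights.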

  4. NMA Analysis Center

    NASA Technical Reports Server (NTRS)

    Kierulf, Halfdan Pascal; Andersen, Per Helge

    2013-01-01

    The Norwegian Mapping Authority (NMA) has during the last few years cooperated closely with the Norwegian Defence Research Establishment (FFI) in the analysis of space geodetic data using the GEOSAT software. In 2012 NMA took over full responsibility for the GEOSAT software, which means that FFI ceased to be an IVS Associate Analysis Center in 2012. NMA has been an IVS Associate Analysis Center since 28 October 2010. NMA's contributions to the IVS as an Analysis Center focus primarily on routine production of session-by-session unconstrained and consistent normal equations by GEOSAT as input to the IVS combined solution. After the recent improvements, we expect that VLBI results produced with GEOSAT will be consistent with results from the other VLBI Analysis Centers to a satisfactory level.

  5. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596

  6. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.

  7. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
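    The robustness measure described above, the minimum singular value of the return difference matrix I + L(jw) at the plant input, can be evaluated numerically over a frequency grid. The loop transfer matrix below is an assumed toy example, not the drone model from the paper:

```python
import numpy as np

def min_return_difference_sv(L, freqs):
    """Minimum singular value of the return difference matrix I + L(jw)
    over a grid of frequencies, a common multiloop robustness measure.
    L is a callable returning the 2x2 loop transfer matrix."""
    return min(np.linalg.svd(np.eye(2) + L(1j * w), compute_uv=False)[-1]
               for w in freqs)

# Assumed illustrative loop: two decoupled integrators with gain 2.
L = lambda s: np.array([[2.0 / s, 0.0], [0.0, 2.0 / s]])
sigma_min = min_return_difference_sv(L, [0.5, 1.0, 2.0, 4.0])
```

    In the paper this scalar (and its analytical gradient with respect to controller gains) drives a numerical optimizer; here it is simply evaluated.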

  8. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.

  9. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDE) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm exists that guarantees a global minimum for PDECCO problems. A new mapping has been developed that transforms PDECCO problems, which have only linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.

  10. The cost of noise reduction in commercial tilt rotor aircraft

    NASA Technical Reports Server (NTRS)

    Faulkner, H. B.

    1974-01-01

    The relationship between direct operating cost (DOC) and departure noise annoyance was developed for commercial tilt rotor aircraft. This was accomplished by generating a series of tilt rotor aircraft designs to meet various noise goals at minimum DOC. These vehicles were spaced across the spectrum of possible noise levels from completely unconstrained to the quietest vehicle that could be designed within the study ground rules. A group of optimization parameters were varied to find the minimum DOC while other inputs were held constant and some external constraints were met. This basic variation was then extended to different aircraft sizes and technology time frames. It was concluded that reducing noise annoyance by designing for lower rotor tip speeds is a very promising avenue for future research and development. It appears that the cost of halving the annoyance compared to an unconstrained design is insignificant and the cost of halving the annoyance again is small.

  11. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems, unconstrained or constrained, uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems; depending on the problem at hand, they need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment, as well as information that can be generated through any deterministic, nondeterministic, or graphical means. Instead of attempting a solution straightaway through a GA without using knowledge of the character of the system, we can do a consciously better job of producing a solution by using the information generated in this first step. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
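    A bare-bones real-coded GA of the kind the record discusses might look like the following sketch; the population size, operators, and the sphere objective are illustrative choices, not the authors' settings:

```python
import random

def genetic_minimize(f, dim, pop_size=40, gens=200, mut_sigma=0.3, seed=0):
    """Bare-bones real-coded GA sketch: tournament selection,
    arithmetic crossover, Gaussian mutation, elitism of one."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        new_pop = [pop[0][:]]                      # keep the best (elitism)
        while len(new_pop) < pop_size:
            a = min(rng.sample(pop, 3), key=f)     # tournament selection
            b = min(rng.sample(pop, 3), key=f)
            w = rng.random()                       # arithmetic crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            child = [x + rng.gauss(0, mut_sigma) for x in child]  # mutation
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)   # toy unconstrained objective
best = genetic_minimize(sphere, dim=3)
```

    Every parameter in the signature is exactly the kind of knob the record's preprocessor would try to choose for the problem at hand.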

  12. Computational alternatives to obtain time optimal jet engine control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Basso, R. J.; Leake, R. J.

    1976-01-01

    Two computational methods are described for determining an open-loop time-optimal control sequence for a simple single-spool turbojet engine modeled by a set of nonlinear differential equations. Both methods are modifications of widely accepted algorithms that solve fixed-time unconstrained optimal control problems with a free right end. The constrained problems considered here have fixed right ends and free time. Dynamic programming is defined on a standard problem, yielding a successive-approximation solution to the time-optimal problem of interest. A feedback control law is obtained and then used to determine the corresponding open-loop control sequence. The Fletcher-Reeves conjugate gradient method has been selected for adaptation to solve a nonlinear optimal control problem with state-variable and control constraints.
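    On a quadratic objective, the Fletcher-Reeves method reduces to linear conjugate gradients. A minimal sketch on an assumed two-variable system, not the turbojet model:

```python
def conjugate_gradient(A, b, x, iters=10):
    """Linear CG for minimizing 0.5 x^T A x - b^T x (equivalently A x = b);
    Fletcher-Reeves nonlinear CG reduces to this on quadratics."""
    matvec = lambda v: [sum(Ai[j] * v[j] for j in range(len(v))) for Ai in A]
    r = [bi - avi for bi, avi in zip(b, matvec(x))]   # residual = -gradient
    d = r[:]
    for _ in range(iters):
        rr = sum(ri * ri for ri in r)
        if rr < 1e-24:
            break                                      # converged
        Ad = matvec(d)
        alpha = rr / sum(di * adi for di, adi in zip(d, Ad))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        beta = sum(ri * ri for ri in r) / rr           # Fletcher-Reeves ratio
        d = [ri + beta * di for ri, di in zip(r, d)]
    return x

# Toy SPD system (assumed example): A x = b with solution (1, -2).
A = [[2.0, 0.0], [0.0, 8.0]]
b = [2.0, -16.0]
x_star = conjugate_gradient(A, b, [0.0, 0.0])
```

    The nonlinear version adapted in the thesis replaces the exact alpha with a line search and re-evaluates the gradient of the control objective at each step.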

  13. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

    Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization techniques, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one-third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
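    A forward-difference approximation is one simple way to generate approximate constraint gradients of the kind discussed above, at the cost of one extra function evaluation per design variable. The function here is an assumed toy example, not a structural constraint from the paper:

```python
def fd_gradient(f, x, h=1e-6):
    """Forward-difference approximation to the gradient of f at x;
    one extra function evaluation per design variable."""
    fx = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h             # perturb one design variable at a time
        g.append((f(xp) - fx) / h)
    return g

# Check against the exact gradient of a toy constraint-like function.
f = lambda x: x[0] ** 2 + 3.0 * x[0] * x[1]
g = fd_gradient(f, [1.0, 2.0])     # exact gradient is [8, 3]
```

    The paper's approximation is cheaper and structure-aware rather than this generic finite difference, but the trade it exploits, slightly less accurate gradients for much less computation, is the same.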

  14. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  15. An Optimization Framework for Dynamic Hybrid Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis

    A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problem takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.

  16. Adiabatic Quantum Computing with Neutral Atoms

    NASA Astrophysics Data System (ADS)

    Hankin, Aaron; Biedermann, Grant; Burns, George; Jau, Yuan-Yu; Johnson, Cort; Kemme, Shanalyn; Landahl, Andrew; Mangan, Michael; Parazzoli, L. Paul; Schwindt, Peter; Armstrong, Darrell

    2012-06-01

    We are developing, both theoretically and experimentally, a neutral atom qubit approach to adiabatic quantum computation. Using our microfabricated diffractive optical elements, we plan to implement an array of optical traps for cesium atoms and use Rydberg-dressed ground states to provide a controlled atom-atom interaction. We will develop this experimental capability to generate a two-qubit adiabatic evolution aimed specifically toward demonstrating the two-qubit quadratic unconstrained binary optimization (QUBO) routine.

  17. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
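    The conversion described above, turning the necessary conditions for a constrained minimum into an unconstrained minimization, can be sketched by minimizing the squared residual of the first-order conditions for a toy equality-constrained problem (not the ascent problem from the paper):

```python
def kkt_residual_sq(z):
    """Squared residual of the first-order necessary conditions for
    min x^2 + y^2 subject to x + y = 1 (a toy stand-in problem).
    z = (x, y, lam); the stationary point is x = y = 0.5, lam = -1."""
    x, y, lam = z
    r = (2 * x + lam, 2 * y + lam, x + y - 1.0)
    return sum(ri * ri for ri in r)

def descend(f, z, lr=0.1, iters=3000, h=1e-6):
    """Plain gradient descent with forward-difference gradients,
    standing in for the GA used in the paper."""
    z = list(z)
    for _ in range(iters):
        fz = f(z)
        g = []
        for i in range(len(z)):
            zp = list(z)
            zp[i] += h
            g.append((f(zp) - fz) / h)
        z = [zi - lr * gi for zi, gi in zip(z, g)]
    return z

z_star = descend(kkt_residual_sq, [0.0, 0.0, 0.0])
```

    Any unconstrained minimizer, including a GA, can now be pointed at this residual; its global minima are exactly the solutions of the necessary conditions.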

  18. Guided particle swarm optimization method to solve general nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr

    2018-04-01

    The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
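    For reference, a plain global-best PSO, the standard half of the hybrid, can be sketched as follows; the coefficients and sphere objective are conventional illustrative choices, and the Nelder-Mead step is omitted:

```python
import random

def pso_minimize(f, dim, swarm=30, iters=300, seed=1):
    """Plain global-best PSO sketch with standard inertia/cognitive/social
    coefficients; the article's Nelder-Mead improvement step is omitted."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso_minimize(sphere, dim=2)
```

    The article's contribution is to run a Nelder-Mead simplex update inside each iteration to refine positions and velocities, rather than chaining PSO and NM sequentially.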

  19. VDLLA: A virtual daddy-long legs optimization

    NASA Astrophysics Data System (ADS)

    Yaakub, Abdul Razak; Ghathwan, Khalil I.

    2016-08-01

    Swarm intelligence is a strong optimization paradigm based on the biological behavior of insects or animals. The success of any optimization algorithm depends on the balance between exploration and exploitation. In this paper, we present a new swarm intelligence algorithm with virtual behavior based on the daddy-long-legs spider (VDLLA). In VDLLA, each agent (spider) has nine positions representing the legs of the spider, and each position represents one solution. The proposed VDLLA is tested on four standard functions using average fitness, median fitness, and standard deviation. The results of the proposed VDLLA have been compared against Particle Swarm Optimization (PSO), Differential Evolution (DE), and the Bat-Inspired Algorithm (BA). Additionally, a t-test has been conducted to show the significance of the difference between our proposed algorithm and the others. VDLLA showed very promising results on benchmark test functions for unconstrained optimization problems and significantly improved on the original swarm algorithms.

  20. Analytical investigations in aircraft and spacecraft trajectory optimization and optimal guidance

    NASA Technical Reports Server (NTRS)

    Markopoulos, Nikos; Calise, Anthony J.

    1995-01-01

    A collection of analytical studies is presented related to unconstrained and constrained aircraft (a/c) energy-state modeling and to spacecraft (s/c) motion under continuous thrust. With regard to a/c unconstrained energy-state modeling, the physical origin of the singular perturbation parameter that accounts for the observed two-time-scale behavior of a/c during energy climbs is identified and explained. With regard to constrained energy-state modeling, optimal control problems are studied involving active state-variable inequality constraints. Departing from the practical deficiencies of the control programs that result from the traditional formulations of such problems, a complete reformulation is proposed which, in contrast to the old formulation, will presumably lead to practically useful controllers that can track an inequality constraint boundary asymptotically, even in the presence of two-sided perturbations about it. Finally, with regard to s/c motion under continuous thrust, a thrust program is proposed for which the equations of two-dimensional motion of a space vehicle in orbit, viewed as a point mass, afford an exact analytic solution. The thrust program arises, under the assumption of tangential thrust, from the costate system corresponding to minimum-fuel, power-limited, coplanar transfers between two arbitrary conics. It can be used not only with power-limited propulsion systems but with any propulsion system capable of generating continuous thrust of controllable magnitude; for propulsion types and classes of transfers for which it is sufficiently optimal, the results of this report suggest a method of maneuvering during planetocentric or heliocentric orbital operations that requires a minimum amount of computation and is thus uniquely suitable for real-time feedback guidance implementations.

  1. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed in two steps. First, starting design variables are computed using a least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of sixteen basis functions are selected: twelve symmetric mode shapes of the cruise-weight configuration, a rigid pitch shape, rigid left and right stabilator rotation shapes, and a residual shape. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error is reduced from 0.9844 inch for the starting configuration to 0.00367 inch by the end of the third run.
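    The first step described above, least-squares fitting of basis-function coefficients, amounts to solving the normal equations. The "mode shapes" below are made-up toy vectors, not the aircraft's:

```python
def lstsq_fit(basis, target):
    """Solve the normal equations (B^T B) c = B^T t for basis-function
    coefficients; basis is a list of basis vectors sampled at the
    same stations as the target shape."""
    n = len(basis)
    m = len(target)
    BtB = [[sum(bi[k] * bj[k] for k in range(m)) for bj in basis] for bi in basis]
    Btt = [sum(bi[k] * target[k] for k in range(m)) for bi in basis]
    # Gaussian elimination (fine for a handful of basis shapes).
    for i in range(n):
        piv = BtB[i][i]
        for j in range(i + 1, n):
            f = BtB[j][i] / piv
            BtB[j] = [a - f * b for a, b in zip(BtB[j], BtB[i])]
            Btt[j] -= f * Btt[i]
    c = [0.0] * n
    for i in reversed(range(n)):
        c[i] = (Btt[i] - sum(BtB[i][j] * c[j] for j in range(i + 1, n))) / BtB[i][i]
    return c

# Toy "mode shapes" sampled at five stations; the target is an exact combination.
b1 = [1.0, 1.0, 1.0, 1.0, 1.0]
b2 = [0.0, 1.0, 2.0, 3.0, 4.0]
target = [2.0 + 0.5 * x for x in b2]   # equals 2*b1 + 0.5*b2
coeffs = lstsq_fit([b1, b2], target)
```

    With sixteen basis shapes the same normal-equation solve applies; the paper's second step then tunes the coefficients further with a numerical optimizer.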

  2. Automatic multiple zebrafish larvae tracking in unconstrained microscopic video conditions.

    PubMed

    Wang, Xiaoying; Cheng, Eva; Burnett, Ian S; Huang, Yushi; Wlodkowic, Donald

    2017-12-14

    Accurate tracking of zebrafish larvae movement is fundamental to research in many biomedical, pharmaceutical, and behavioral science applications. However, the locomotive characteristics of zebrafish larvae differ significantly from those of adult zebrafish, so existing adult zebrafish tracking systems cannot reliably track larvae. Further, the much smaller size of larvae relative to the container makes the detection of water impurities inevitable, which further degrades larvae tracking, or else demands very strict video imaging conditions that typically produce unreliable results under realistic experimental conditions. This paper investigates the adaptation of advanced computer vision segmentation techniques and multiple-object tracking algorithms to develop an accurate, efficient, and reliable multiple zebrafish larvae tracking system. The proposed system has been tested on a set of single and multiple adult and larval zebrafish videos in a wide variety of complex video conditions, including shadowing, labels, water bubbles, and background artifacts. Compared with existing state-of-the-art and commercial multiple-organism tracking systems, the proposed system improves tracking accuracy by up to 31.57% in unconstrained video imaging conditions. To facilitate the evaluation of zebrafish segmentation and tracking research, a dataset with annotated ground truth is also presented. The software is publicly accessible.

  3. Evaluation of Advanced Thermal Protection Techniques for Future Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Cowart, Kris

    2001-01-01

    A method for integrating aeroheating analysis into conceptual reusable launch vehicle (RLV) design is presented in this thesis. This process allows faster turnaround in converging an RLV design by producing an optimized thermal protection system (TPS). It consists of the coupling and automation of four software packages: MINIVER, TPSX, TCAT, and ADS. MINIVER is an aeroheating code that produces centerline radiation equilibrium temperatures, convective heating rates, and heat loads over simplified vehicle geometries, including flat plates and swept cylinders that model wings and leading edges, respectively. TPSX is a NASA Ames material properties database available on the World Wide Web. The newly developed Thermal Calculation Analysis Tool (TCAT) uses finite difference methods to carry out a transient in-depth 1-D conduction analysis over the center mold line of the vehicle. This is used along with the Automated Design Synthesis (ADS) code to correctly size the vehicle's TPS. The numerical optimizer ADS uses algorithms that solve constrained and unconstrained design problems. The resulting outputs of this process are TPS material types, unit thicknesses, and acreage percentages. TCAT was developed for several purposes. First, it provides a means to calculate the transient in-depth conduction seen by the surface of the TPS material that protects a vehicle during ascent and reentry; along with the in-depth conduction, radiation from the material surface is calculated, together with the temperatures at the backface and interior of the TPS material. Second, TCAT adds speed and automation to the overall design process. A further motivation for TCAT was to enable TPS optimization.

  4. Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems

    NASA Astrophysics Data System (ADS)

    Kreuder, John J.

    Thermoelectrics are solid-state devices that convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes, thermoelectric materials have become less costly and more efficient. As next-generation thermoelectric materials become available, industries need to quickly and cost-effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, treating only module design rather than an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, the Thermoelectric Power System Simulator (TEPSS), is developed for system-level simulation and optimization of heat recovery systems. The platform is designed around a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLAB(R). A modular, shell-based architecture carries out concept generation, system simulation, and optimization. Systems are defined by the components and interconnectivity specified by the user. An iterative solution process based on Newton's method determines the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user-specified design variables and constraints. During this iterative process many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new component types and implement component models with an adequate degree of complexity for a given application. Special steps are taken to ensure that the system of nonlinear algebraic equations in the system engineering model is square and that all equations are independent. In addition, the third-party program FluidProp is leveraged to allow simulation of systems with a range of working fluids. Sequential unconstrained minimization techniques are used to prevent physical variables such as pressure and temperature from trending to infinity during optimization. Two case studies verify and demonstrate the simulation and optimization routines employed by TEPSS: the first, a simple combined cycle in which the heat exchanger size and fuel rate are optimized; the second, the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton cycle. A basic package of components and interconnections is also verified and provided.

  5. Metabolic flux estimation using particle swarm optimization with penalty function.

    PubMed

    Long, Hai-Xia; Xu, Wen-Bo; Sun, Jun

    2009-01-01

    Metabolic flux estimation through 13C tracer experiments is crucial for quantifying intracellular metabolic fluxes. It corresponds to a constrained optimization problem that minimizes a weighted distance between measured and simulated results. In this paper, we propose particle swarm optimization (PSO) with a penalty function to solve the 13C-based metabolic flux estimation problem. The constrained problem is transformed into an unconstrained one by penalizing the stoichiometric constraints and building a single objective function, which in turn is minimized using the PSO algorithm for flux quantification. The proposed algorithm is applied to estimate the central metabolic fluxes of Corynebacterium glutamicum. Simulation results show that the proposed algorithm has superior performance and fast convergence compared with other existing algorithms.
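The penalty transformation described above, folding the constraints into the objective and then minimizing with PSO, can be sketched as follows. This is a generic toy (a quadratic objective with one linear equality constraint and hand-picked PSO coefficients), not the paper's 13C flux model:

```python
import random

def penalized(f, constraints, x, mu=1e3):
    # Quadratic penalty: add mu * g(x)^2 for each equality constraint g(x) = 0.
    return f(x) + mu * sum(g(x) ** 2 for g in constraints)

def pso(obj, dim, n=30, iters=300, lo=-5.0, hi=5.0, seed=0):
    """Plain global-best particle swarm optimization."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [obj(p) for p in pos]
    gi = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = obj(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# Toy flux-balance analogue: minimize (x0-2)^2 + (x1-1)^2 subject to x0 + x1 = 2.
# The exact constrained minimizer is (1.5, 0.5).
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
g = [lambda x: x[0] + x[1] - 2.0]
best, val = pso(lambda x: penalized(f, g, x), dim=2)
```

With a large penalty weight the swarm settles near the constrained optimum; in practice the weight is tuned to balance constraint satisfaction against conditioning.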

  6. Public domain optical character recognition

    NASA Astrophysics Data System (ADS)

    Garris, Michael D.; Blue, James L.; Candela, Gerald T.; Dimmick, Darrin L.; Geist, Jon C.; Grother, Patrick J.; Janet, Stanley A.; Wilson, Charles L.

    1995-03-01

    A public domain document processing system has been developed by the National Institute of Standards and Technology (NIST). The system is a standard reference form-based handprint recognition system for evaluating optical character recognition (OCR), and it is intended to provide a baseline of performance on an open application. The system's source code, training data, performance assessment tools, and type of forms processed are all publicly available. The system recognizes the handprint entered on handwriting sample forms like the ones distributed with NIST Special Database 1. From these forms, the system reads hand-printed numeric fields, upper and lowercase alphabetic fields, and unconstrained text paragraphs composed of words from a limited-size dictionary. The modular design of the system makes it useful for component evaluation and comparison, training and testing set validation, and multiple system voting schemes. The system contains a number of significant contributions to OCR technology, including an optimized probabilistic neural network (PNN) classifier that operates a factor of 20 faster than traditional software implementations of the algorithm. The source code for the recognition system is written in C and is organized into 11 libraries. In all, there are approximately 19,000 lines of code supporting more than 550 subroutines. Source code is provided for form registration, form removal, field isolation, field segmentation, character normalization, feature extraction, character classification, and dictionary-based postprocessing. The recognition system has been successfully compiled and tested on a host of UNIX workstations. This paper gives an overview of the recognition system's software architecture, including descriptions of the various system components along with timing and accuracy statistics.
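The PNN mentioned above is essentially a Parzen-window classifier: each class score is an average of Gaussian kernels centered on that class's training samples, and the NIST contribution is a fast implementation of sums of this form. A minimal sketch with hypothetical 2-D features and toy training data:

```python
import math

def pnn_classify(x, train, sigma=0.3):
    """Probabilistic neural network (Parzen-window) classifier sketch:
    score each class by the mean Gaussian kernel response of its samples,
    then return the class with the highest score."""
    def kernel(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2 * sigma ** 2))
    scores = {c: sum(kernel(x, s) for s in samples) / len(samples)
              for c, samples in train.items()}
    return max(scores, key=scores.get)

# Hypothetical 2-D feature vectors for two digit classes.
train = {"0": [(0.1, 0.2), (0.0, 0.3)],
         "1": [(0.9, 0.8), (1.0, 0.7)]}
label = pnn_classify((0.05, 0.25), train)
```

Real feature vectors (e.g. the system's character features) are much higher-dimensional, which is exactly why an optimized kernel-sum implementation pays off.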

  7. Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints

    NASA Technical Reports Server (NTRS)

    Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren

    2015-01-01

    Interplanetary missions are often subject to difficult constraints, such as the solar phase angle upon arrival at the destination, the arrival velocity, and flyby altitudes. Preliminary design of such missions is often conducted by solving the unconstrained problem and then filtering away solutions that do not naturally satisfy the constraints. However, this can bias the search into non-advantageous regions of the solution space, so it can be better to conduct preliminary design with the full set of constraints imposed. In this work two stochastic global search methods are developed that are well suited to the constrained global interplanetary trajectory optimization problem.

  8. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which influences the registration results, and localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, applied an automatic method to extract the landmarks. Combining these two steps, we propose an automatic, accurate, and robust registration method that yields satisfactory registration results.

  9. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    NASA Astrophysics Data System (ADS)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.

  10. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear, constrained and unconstrained function minimization problems is presented. The algorithm is the sequential unconstrained minimization technique (SUMT), with Newton's method used for the unconstrained minimizations. The use of NEWSUMT and the definition of all parameters are described.
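The SUMT scheme can be illustrated on a one-variable problem: minimize x^2 subject to x >= 1 through a sequence of interior (inverse-barrier) subproblems, each solved by Newton's method. This is a hand-rolled sketch, not NEWSUMT's actual formulation:

```python
def solve_sumt(r0=1.0, shrink=0.1, outers=8):
    """Minimize f(x) = x^2 subject to x >= 1 via SUMT:
    phi(x) = x^2 + r/(x - 1), solved by damped Newton, with the
    barrier weight r shrunk between outer iterations."""
    x = 2.0                      # strictly feasible start
    r = r0
    for _ in range(outers):
        # phi'  = 2x - r/(x-1)^2
        # phi'' = 2 + 2r/(x-1)^3   (positive for x > 1, so Newton descends)
        for _ in range(100):
            d1 = 2 * x - r / (x - 1) ** 2
            d2 = 2 + 2 * r / (x - 1) ** 3
            step = d1 / d2
            while x - step <= 1:     # damp so the iterate stays feasible
                step *= 0.5
            x -= step
            if abs(d1) < 1e-12:
                break
        r *= shrink              # tighten the barrier and re-solve
    return x

x = solve_sumt()
```

As r shrinks, the subproblem minimizers approach the constrained optimum x* = 1 from the feasible side, which is the behavior the abstract's "sequence of unconstrained minimizations" refers to.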

  11. Optimally stopped variational quantum algorithms

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for quadratic unconstrained binary optimization (QUBO) problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA and even improve its scaling properties.
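A QUBO instance is just an upper-triangular matrix Q scoring binary strings, and for the small sizes near-term processors handle, the natural classical baseline is a brute-force scan. A sketch with a hypothetical 3-variable instance (diagonal entries act as linear biases since x_i^2 = x_i):

```python
from itertools import product

def qubo_energy(Q, x):
    # E(x) = sum over i <= j of Q[i][j] * x_i * x_j, with binary x_i in {0, 1}.
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

def brute_force_qubo(Q):
    """Exhaustive minimizer over all 2^n assignments (baseline for benchmarks)."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Hypothetical instance: each variable is rewarded (-1 on the diagonal),
# but adjacent variables clash (+2 off-diagonal).
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
best = brute_force_qubo(Q)
```

Heuristics like VQA are benchmarked against exactly this kind of exhaustive ground truth on small instances.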

  12. Research on design method of the full form ship with minimum thrust deduction factor

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Miao, Ai-qin; Zhang, Zhu-xin

    2015-04-01

    In the preliminary design stage of full form ships, in order to obtain a hull form with low resistance and maximum propulsion efficiency, an optimization design program for a full form ship with minimum thrust deduction factor has been developed, combining potential flow theory and boundary layer theory with optimization techniques. In the optimization process, the Sequential Unconstrained Minimization Technique (SUMT) interior point method of Nonlinear Programming (NLP) is used with the minimum thrust deduction factor as the objective function. An appropriate displacement is the basic constraint condition, and avoidance of boundary layer separation is an additional one. The parameters of the hull form modification function are used as design variables. Finally, a numerical optimization example for the after-body lines of a 50,000 DWT product oil tanker is provided, indicating that propulsion efficiency was improved distinctly by this optimal design method.

  13. Optimizing an Actuator Array for the Control of Multi-Frequency Noise in Aircraft Interiors

    NASA Technical Reports Server (NTRS)

    Palumbo, D. L.; Padula, S. L.

    1997-01-01

    Techniques developed for selecting an optimized actuator array for interior noise reduction at a single frequency are extended to the multi-frequency case. Transfer functions for 64 actuators were obtained at 5 frequencies from ground testing the rear section of a fully trimmed DC-9 fuselage. A single loudspeaker facing the left side of the aircraft was the primary source. A combinatorial search procedure (tabu search) was employed to find optimum actuator subsets of from 2 to 16 actuators. Noise reduction predictions derived from the transfer functions were used as a basis for evaluating actuator subsets during optimization. Results indicate that it is necessary to constrain actuator forces during optimization. Unconstrained optimizations selected actuators which require unrealistically large forces. Two methods of constraint are evaluated. It is shown that a fast, but approximate, method yields results equivalent to an accurate, but computationally expensive, method.
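The combinatorial search the abstract describes can be sketched as a tabu search over fixed-size actuator subsets with single-swap moves. The scoring function below is a made-up stand-in (additive actuator gains with a clash penalty for adjacent actuators), not the paper's transfer-function-based noise-reduction predictions:

```python
import random

def tabu_subset(score, n_items, k, iters=100, tabu_len=10, seed=0):
    """Tabu search over k-of-n subsets using single-swap moves.
    `score` maps a set of indices to a value to maximize; the tabu list
    forbids immediately reversing a recent swap."""
    rng = random.Random(seed)
    current = set(rng.sample(range(n_items), k))
    best, best_val = set(current), score(current)
    tabu = []
    for _ in range(iters):
        moves = []
        for out in current:
            for inn in set(range(n_items)) - current:
                if (out, inn) in tabu:
                    continue
                cand = (current - {out}) | {inn}
                moves.append((score(cand), out, inn, cand))
        if not moves:
            break
        val, out, inn, cand = max(moves)   # best non-tabu neighbor, even if worse
        current = cand
        tabu.append((inn, out))            # forbid the reverse swap for a while
        tabu = tabu[-tabu_len:]
        if val > best_val:
            best, best_val = set(cand), val
    return best, best_val

# Hypothetical per-actuator gains; adjacent actuators partially cancel.
gains = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.05, 0.6]
def score(sub):
    clash = sum(1 for i in sub for j in sub if j == i + 1)
    return sum(gains[i] for i in sub) - 0.5 * clash

chosen, val = tabu_subset(score, n_items=8, k=3)
```

Accepting the best non-improving move while forbidding its reversal is what lets tabu search escape local optima that a pure greedy swap search would get stuck in.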

  14. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimizations Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  15. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. Replacing the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial increase in the complexity of optimization problems that can be handled efficiently.

  16. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  17. A new smoothing modified three-term conjugate gradient method for [Formula: see text]-norm minimization problem.

    PubMed

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a kind of nonsmooth optimization problem with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak method has good numerical properties, the proposed method possesses the sufficient descent property without any line search and is proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
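The smoothing step replaces each |x_i| with a differentiable surrogate such as sqrt(x_i^2 + mu^2), after which any smooth unconstrained method applies. The sketch below uses plain gradient descent on a one-variable instance in place of the paper's three-term conjugate gradient update:

```python
import math

def smooth_abs(x, mu=1e-3):
    # Smooth surrogate for |x|: sqrt(x^2 + mu^2) -> |x| as mu -> 0.
    return math.sqrt(x * x + mu * mu)

def smooth_abs_grad(x, mu=1e-3):
    return x / math.sqrt(x * x + mu * mu)

def solve(lam=2.0, mu=1e-3, step=0.1, iters=2000):
    """Gradient descent on the smoothed objective (x - 3)^2 + lam * |x|_mu.
    (Plain gradient descent stands in here for the paper's three-term
    conjugate gradient update; the smoothing idea is the same.)"""
    x = 0.0
    for _ in range(iters):
        g = 2 * (x - 3) + lam * smooth_abs_grad(x, mu)
        x -= step * g
    return x

x = solve()
```

The exact nonsmooth minimizer of (x - 3)^2 + 2|x| is x = 2 (set 2(x - 3) + 2 = 0 on the positive branch), and the smoothed iterates converge to it as mu shrinks.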

  18. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.

  19. Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David

    2016-01-01

    Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.

  20. Digital Image Restoration Under a Regression Model - The Unconstrained, Linear Equality and Inequality Constrained Approaches

    DTIC Science & Technology

    1974-01-01

    DIGITAL IMAGE RESTORATION UNDER A REGRESSION MODEL - THE UNCONSTRAINED, LINEAR EQUALITY AND INEQUALITY CONSTRAINED APPROACHES. January 1974. Nelson Delfino d'Avila Mascarenhas ... Report 520 ... a two-dimensional form adequately describes the linear model. A discretization is performed by using quadrature methods. By trans

  1. ADS: A FORTRAN program for automated design synthesis: Version 1.10

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1985-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis, Version 1.10) is a FORTRAN program for the solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level several options are available, so that over 100 possible combinations can be created. Example strategies include sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions, and one-dimensional search options include polynomial interpolation and the Golden Section method. Emphasis is placed on ease of use: all information is transferred via a single parameter list, default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to override these if desired.
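Of the one-dimensional search options named, the Golden Section method is the simplest to sketch: it needs only function comparisons and shrinks the bracket by a constant factor each step. A generic sketch (not ADS's FORTRAN implementation):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b] by Golden Section search."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                      # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

xmin = golden_section(lambda x: (x - 1.5) ** 2 + 0.25, 0.0, 4.0)
```

Because the interior points are reused between iterations, each step costs only one new function evaluation, which matters when the "function" is a full structural analysis.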

  2. Optimum structural design with static aeroelastic constraints

    NASA Technical Reports Server (NTRS)

    Bowman, Keith B.; Grandhi, Ramana V.; Eastep, F. E.

    1989-01-01

    The static aeroelastic performance characteristics, divergence velocity, control effectiveness and lift effectiveness are considered in obtaining an optimum weight structure. A typical swept wing structure is used, with upper and lower skins, spar and rib thicknesses, and spar cap and vertical post cross-sectional areas as the design parameters. Incompressible aerodynamic strip theory is used to derive the constraint formulations and aerodynamic load matrices. A Sequential Unconstrained Minimization Technique (SUMT) algorithm is used to optimize the wing structure to meet the desired performance constraints.

  3. Optimal Aerodynamic Design of Conventional and Coaxial Helicopter Rotors in Hover and Forward Flight

    DTIC Science & Technology

    2015-12-28

    ... forward flight. Orchard and Newman [6] investigated fundamental design features of compound helicopters using a wing, a single rotor, and a propulsor ... style compound. For the case considered here, the coaxial rotors are unconstrained in lift offset. If a wing were used in a case that also included a lift

  4. Proceedings of the Quantum Computation for Physical Modeling Workshop 2004. Held in North Falmouth, MA on 12-15 September 2004

    DTIC Science & Technology

    2005-10-01

    ...late the difficulty of some basic 1-bit and n-bit quantum and classical operations in a simple unconstrained scenario. KEY WORDS: Time evolution ... quantum circuit and design are presented for an optimized entangling probe attacking the BB84 protocol of quantum key distribution (QKD) and yielding ... unambiguous, at least some of the time. It follows that the BB84 (Bennett-Brassard 1984) protocol of quantum key distribution has a vulnerability similar to

  5. Conjugate gradient determination of optimal plane changes for a class of three-impulse transfers between noncoplanar circular orbits

    NASA Technical Reports Server (NTRS)

    Burrows, R. R.

    1972-01-01

    A particular type of three-impulse transfer between two circular orbits is analyzed. The possibility of three plane changes is recognized, and the problem is to distribute these plane changes optimally so as to minimize the sum of the individual impulses. Numerical difficulties and their solution are discussed. Numerical results obtained from a conjugate gradient technique are presented both for the case where the individual plane changes are unconstrained and for the case where they are constrained. Not unexpectedly, multiple minima are found. The techniques presented could be extended to the finite-burn case, but the contents are addressed primarily to preliminary mission design and vehicle sizing.
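The conjugate gradient technique referenced here is shown below on a generic quadratic stand-in rather than the impulse-sum cost: minimizing 0.5 x'Ax - b'x is equivalent to solving Ax = b, and linear CG does so using only matrix-vector products:

```python
def conjugate_gradient(A, b, x0, iters=50, tol=1e-12):
    """Linear conjugate gradient for A x = b with A symmetric positive
    definite, i.e. the minimizer of 0.5 x'Ax - b'x."""
    n = len(b)
    mat = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = x0[:]
    r = [bi - ai for bi, ai in zip(b, mat(A, x))]   # residual b - Ax
    p = r[:]
    rr = dot(r, r)
    for _ in range(iters):
        if rr < tol:
            break
        Ap = mat(A, p)
        alpha = rr / dot(p, Ap)                     # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        beta = rr_new / rr                          # new A-conjugate direction
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, 1.0]
x = conjugate_gradient(A, b, [0.0, 0.0])
```

For this 2-by-2 system the exact solution is (0.2, 0.4), reached in two iterations; the nonlinear variants used for trajectory problems replace the exact step with a line search and restart periodically.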

  6. A cubic extended interior penalty function for structural optimization

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Haftka, R. T.

    1979-01-01

    This paper describes an optimization procedure for the minimum-weight design of complex structures. The procedure is based on a new cubic extended interior penalty function (CEIPF) used with the sequential unconstrained minimization technique (SUMT) and Newton's method. The Hessian matrix of the penalty function is approximated using only the constraints and their derivatives. The CEIPF is designed to minimize the error in this Hessian approximation; as a result, the number of structural analyses required is small and independent of the number of design variables. Three example problems are reported, in which the number of structural analyses is reduced by as much as 50 percent below previously reported results.
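An extended interior penalty keeps the -1/g barrier in the feasible interior but swaps in a polynomial past a transition value -eps, so the function stays defined and smooth when a constraint is slightly violated. In the sketch below, the cubic branch is simply the third-order Taylor expansion of -1/g about g = -eps; the paper's CEIPF chooses its cubic coefficients differently, to minimize the Hessian-approximation error:

```python
def ext_penalty(g, eps=0.1):
    """Extended interior penalty for one constraint g(x) <= 0.
    Interior branch: -1/g for g <= -eps.
    Extension branch (g > -eps): cubic Taylor expansion of -1/g about
    g = -eps, so value and first three derivatives match at the joint.
    (Illustrative coefficients, not the paper's CEIPF.)"""
    if g <= -eps:
        return -1.0 / g
    t = g + eps
    return 1 / eps + t / eps ** 2 + t ** 2 / eps ** 3 + t ** 3 / eps ** 4

# Deep in the feasible region the penalty is mild; near and past the
# boundary it grows like |g|^3 instead of blowing up at g = 0.
inside = ext_penalty(-1.0)     # -1/(-1) = 1
at_boundary = ext_penalty(0.0) # finite, unlike the raw -1/g barrier
```

The finite value at g = 0 is the point of the extension: SUMT iterates that momentarily step infeasible still see a well-defined, smoothly increasing penalty.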

  7. Simple wavefront correction framework for two-photon microscopy of in-vivo brain

    PubMed Central

    Galwaduge, P. T.; Kim, S. H.; Grosberg, L. E.; Hillman, E. M. C.

    2015-01-01

    We present an easily implemented wavefront correction scheme that has been specifically designed for in-vivo brain imaging. The system can be implemented with a single liquid crystal spatial light modulator (LCSLM), which makes it compatible with existing patterned illumination setups, and provides measurable signal improvements even after a few seconds of optimization. The optimization scheme is signal-based and does not require exogenous guide-stars, repeated image acquisition or beam constraint. The unconstrained beam approach allows the use of Zernike functions for aberration correction and Hadamard functions for scattering correction. Low order corrections performed in mouse brain were found to be valid up to hundreds of microns away from the correction location. PMID:26309763
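The signal-based, guide-star-free optimization described above can be sketched as coordinate ascent over modal coefficients: perturb one basis mode (Zernike-like index) at a time and keep whichever perturbation raises the measured signal. The signal model below is a made-up Gaussian peak standing in for the two-photon signal, and the fixed aberration vector is hypothetical:

```python
import math

def optimize_modes(signal, n_modes, amps=(0.5, 0.25, 0.1), iters=3):
    """Modal, signal-based correction sketch: for each mode, try +/- amplitude
    perturbations at decreasing scales and keep any that raise the signal.
    No guide star, image stack, or beam constraint is needed."""
    coeffs = [0.0] * n_modes
    for _ in range(iters):
        for m in range(n_modes):
            for a in amps:
                for s in (+a, -a):
                    trial = coeffs[:]
                    trial[m] += s
                    if signal(trial) > signal(coeffs):
                        coeffs = trial
    return coeffs

# Hypothetical "two-photon signal": peaked when each coefficient cancels
# a fixed aberration (here [0.4, -0.2, 0.1]).
aberr = [0.4, -0.2, 0.1]
signal = lambda c: math.exp(-sum((ci + ai) ** 2 for ci, ai in zip(c, aberr)))
best = optimize_modes(signal, 3)
```

In hardware, each `signal(trial)` evaluation corresponds to writing the trial phase pattern to the LCSLM and reading back the fluorescence signal, which is why only a few seconds of optimization already helps.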

  8. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the optimal infiltration parameters already obtained are not destroyed during the subsequent genetic algorithm generations required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior and simple in concept, and has potential for field application.

  9. Constrained Laboratory vs. Unconstrained Steering-Induced Rollover Crash Tests.

    PubMed

    Kerrigan, Jason R; Toczyski, Jacek; Roberts, Carolyn; Zhang, Qi; Clauser, Mark

    2015-01-01

    The goal of this study was to evaluate how well an in-laboratory rollover crash test methodology that constrains vehicle motion can reproduce the dynamics of unconstrained full-scale steering-induced rollover crash tests in sand. Data from previously published unconstrained steering-induced rollover crash tests using a full-size pickup and a mid-sized sedan were analyzed to determine vehicle-to-ground impact conditions and the kinematic response of the vehicles throughout the tests. Then, a pair of replicate vehicles were prepared to match the inertial properties of the steering-induced test vehicles and configured to record dynamic roof structure deformations and kinematic response. Both vehicles experienced greater increases in roll-axis angular velocities in the unconstrained tests than in the constrained tests; however, the increases that occurred during the trailing-side roof interaction were nearly identical between tests for both vehicles. Both vehicles experienced linear accelerations in the constrained tests that were similar to those in the unconstrained tests, but the pickup, in particular, had accelerations that matched very closely in magnitude, timing, and duration between the two test types. Deformations in the truck were greater in the constrained test than in the unconstrained test, and deformations in the sedan were greater in the unconstrained test than in the constrained test, as a result of the constraints of the test fixture and of differences in trailing-side impact velocity. The results of the current study suggest that in-laboratory rollover tests can be used to simulate the injury-causing portions of unconstrained rollover crashes. To date, such a demonstration has not been published in the open literature. This study did, however, show that the road surface can affect vehicle response in a way that may not be possible to mimic in the laboratory. Lastly, this study showed that configuring the in-laboratory tests to match the leading-side touchdown conditions could result in differences in the trailing-side impact conditions.

  10. Constrained and Unconstrained Variational Finite Element Formulation of Solutions to a Stress Wave Problem - a Numerical Comparison.

    DTIC Science & Technology

    1982-10-01

    J. J. Wu, "Initial Boundary Value Problems of Gun Dynamics Solved by Finite Element Unconstrained Variational Formulations," Innovative Numerical Analysis for the Applied Engineering Science, R. P. Shaw, et al., Editors, University Press of Virginia, Charlottesville, pp. 733-741, 1980; J. J. Wu, "Solutions to Initial

  11. Optimized Periocular Template Selection for Human Recognition

    PubMed Central

    Sa, Pankaj K.; Majhi, Banshidhar

    2013-01-01

    A novel approach is proposed for optimally selecting a rectangular template around the periocular region for human recognition. A template of the periocular image larger than the optimal one can be slightly more potent for recognition, but it heavily slows down the biometric system by making feature extraction computationally intensive and increasing the database size. A smaller template, on the contrary, cannot yield desirable recognition, though it performs faster due to the lower computation for feature extraction. These two contradictory objectives (namely, (a) minimizing the size of the periocular template and (b) maximizing recognition through the template) are optimized in the proposed research. This paper proposes four different approaches for dynamic optimal template selection from the periocular region. The proposed methods are tested on the publicly available unconstrained UBIRISv2 and FERET databases, and satisfactory results have been achieved. The template thus obtained can be used for recognition of individuals in an organization and can be generalized to recognize every citizen of a nation. PMID:23984370

  12. Newton's method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, J. J.; Sorensen, D. C.

    1982-02-01

    Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the more recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
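    The line search approach mentioned above can be sketched in a few lines: take the Newton step where possible, but backtrack along it until a sufficient-decrease (Armijo) condition holds. This is a generic one-dimensional sketch with an illustrative test function, not the paper's presentation.

    ```python
    import math

    # Minimal damped-Newton sketch for 1-D unconstrained minimization,
    # illustrating line-search globalization of the pure Newton step.
    def newton_line_search(f, df, d2f, x0, tol=1e-10, max_iter=50):
        x = x0
        for _ in range(max_iter):
            g = df(x)
            if abs(g) < tol:
                break
            p = -g / d2f(x)  # Newton direction (requires d2f > 0)
            t = 1.0
            # Backtracking (Armijo) line search on the step length t.
            while f(x + t * p) > f(x) + 1e-4 * t * g * p:
                t *= 0.5
            x += t * p
        return x

    # Smooth, strictly convex test function: f(x) = x^2 + e^x,
    # whose minimizer solves 2x + e^x = 0 (x ~ -0.3517).
    f   = lambda x: x * x + math.exp(x)
    df  = lambda x: 2 * x + math.exp(x)
    d2f = lambda x: 2 + math.exp(x)
    x_min = newton_line_search(f, df, d2f, x0=2.0)
    ```

    Far from the minimizer the line search damps the step; near it the unit Newton step is accepted and convergence is quadratic. A trust region method would instead bound the step size directly and adjust the bound from the model's predictive accuracy.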

  13. Bayesian Optimization Under Mixed Constraints with A Slack-Variable Augmented Lagrangian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picheny, Victor; Gramacy, Robert B.; Wild, Stefan M.

    An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers. Recently, surrogate-based Bayesian optimization (BO) sub-solvers have been successfully deployed in the AL framework for a more global search in the presence of inequality constraints; however, a drawback was that expected improvement (EI) evaluations relied on Monte Carlo. Here we introduce an alternative slack variable AL, and show that in this formulation the EI may be evaluated with library routines. The slack variables furthermore facilitate equality as well as inequality constraints, and mixtures thereof. We show our new slack “ALBO” compares favorably to the original. Its superiority over conventional alternatives is reinforced on several mixed constraint examples.
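    The AL-to-unconstrained conversion in the first sentence can be illustrated with the classic equality-constrained form (the slack-variable and Bayesian-optimization machinery of the paper are deliberately omitted; the problem, solver, and parameter values below are illustrative assumptions).

    ```python
    # Classic augmented-Lagrangian sketch: minimize f(x, y) = x^2 + y^2
    # subject to h(x, y) = x + y - 1 = 0. Each outer iteration minimizes
    # the unconstrained AL, then updates the multiplier estimate.
    def solve_al(rho=10.0, outer=20, inner=2000, step=1e-2):
        x = y = lam = 0.0
        for _ in range(outer):
            # Unconstrained subproblem:
            #   L(x, y) = f + lam * h + (rho / 2) * h^2,
            # minimized here by plain gradient descent.
            for _ in range(inner):
                h = x + y - 1.0
                gx = 2 * x + lam + rho * h
                gy = 2 * y + lam + rho * h
                x -= step * gx
                y -= step * gy
            lam += rho * (x + y - 1.0)  # first-order multiplier update
        return x, y, lam

    x, y, lam = solve_al()
    ```

    The iterates converge to the constrained optimum x = y = 0.5 with multiplier lam = -1; unlike a pure penalty method, rho need not go to infinity because the multiplier update absorbs the constraint.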

  14. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
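    For concreteness, the OGM iteration as it is commonly stated in the literature can be sketched as follows. This is a paraphrase, not the paper's exact presentation: the special final-iterate momentum factor used in the worst-case bound is omitted, and the test problem is an arbitrary smooth convex quadratic.

    ```python
    import math

    # Sketch of the optimized gradient method (OGM) update as commonly
    # stated (a paraphrase; treat the coefficients as an assumption and
    # consult the paper for the exact final-iterate factor). Minimize the
    # smooth convex quadratic f(x) = 0.5*(x1^2 + 10*x2^2), whose gradient
    # is Lipschitz continuous with constant L = 10.
    def grad(x):
        return [x[0], 10.0 * x[1]]

    def ogm(x0, L, iters):
        x, y, theta = list(x0), list(x0), 1.0
        for _ in range(iters):
            y_new = [xi - gi / L for xi, gi in zip(x, grad(x))]  # gradient step
            theta_new = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * theta ** 2))
            # OGM adds a second momentum term (theta/theta_new)*(y_new - x)
            # on top of Nesterov's (theta - 1)/theta_new * (y_new - y_old).
            x = [yn + ((theta - 1.0) / theta_new) * (yn - yo)
                    + (theta / theta_new) * (yn - xi)
                 for yn, yo, xi in zip(y_new, y, x)]
            y, theta = y_new, theta_new
        return y

    x_final = ogm([1.0, 1.0], L=10.0, iters=2000)
    ```

    Dropping the second momentum term recovers Nesterov's fast gradient method; the extra term is what yields OGM's factor-of-two improvement in the worst-case bound.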

  15. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  16. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Long-range interacting systems in the unconstrained ensemble.

    PubMed

    Latella, Ivan; Pérez-Madrid, Agustín; Campa, Alessandro; Casetti, Lapo; Ruffo, Stefano

    2017-01-01

    Completely open systems can exchange heat, work, and matter with the environment. While energy, volume, and number of particles fluctuate under completely open conditions, the equilibrium states of the system, if they exist, can be specified using the temperature, pressure, and chemical potential as control parameters. The unconstrained ensemble is the statistical ensemble describing completely open systems and the replica energy is the appropriate free energy for these control parameters from which the thermodynamics must be derived. It turns out that macroscopic systems with short-range interactions cannot attain equilibrium configurations in the unconstrained ensemble, since temperature, pressure, and chemical potential cannot be taken as a set of independent variables in this case. In contrast, we show that systems with long-range interactions can reach states of thermodynamic equilibrium in the unconstrained ensemble. To illustrate this fact, we consider a modification of the Thirring model and compare the unconstrained ensemble with the canonical and grand-canonical ones: The more the ensemble is constrained by fixing the volume or number of particles, the larger the space of parameters defining the equilibrium configurations.

  18. Reliability of the Achilles tendon tap reflex evoked during stance using a pendulum hammer.

    PubMed

    Mildren, Robyn L; Zaback, Martin; Adkin, Allan L; Frank, James S; Bent, Leah R

    2016-01-01

    The tendon tap reflex (T-reflex) is often evoked in relaxed muscles to assess spinal reflex circuitry. Factors contributing to reflex excitability are modulated to accommodate specific postural demands. Thus, there is a need to be able to assess this reflex in a state where spinal reflex circuitry is engaged in maintaining posture. The aim of this study was to determine whether a pendulum hammer could provide controlled stimuli to the Achilles tendon and evoke reliable muscle responses during normal stance. A second aim was to establish appropriate stimulus parameters for experimental use. Fifteen healthy young adults stood on a forceplate while taps were applied to the Achilles tendon under conditions in which postural sway was constrained (by providing centre of pressure feedback) or unconstrained (no feedback) from an invariant release angle (50°). Twelve participants repeated this testing approximately six months later. Within one experimental session, tap force and T-reflex amplitude were found to be reliable regardless of whether postural sway was constrained (tap force ICC=0.982; T-reflex ICC=0.979) or unconstrained (tap force ICC=0.968; T-reflex ICC=0.964). T-reflex amplitude was also reliable between experimental sessions (constrained ICC=0.894; unconstrained ICC=0.890). When a T-reflex recruitment curve was constructed, optimal mid-range responses were observed using a 50° release angle. These results demonstrate that reliable Achilles T-reflexes can be evoked in standing participants without the need to constrain posture. The pendulum hammer provides a simple method to allow researchers and clinicians to gather information about reflex circuitry in a state where it is involved in postural control. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Application of augmented-Lagrangian methods in meteorology: Comparison of different conjugate-gradient codes for large-scale minimization

    NASA Technical Reports Server (NTRS)

    Navon, I. M.

    1984-01-01

    A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.

  20. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.

    2001-01-01

    We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.

  1. Gaussian Mean Field Lattice Gas

    NASA Astrophysics Data System (ADS)

    Scoppola, Benedetto; Troiani, Alessio

    2018-03-01

    We study rigorously a lattice gas version of the Sherrington-Kirkpatrick spin glass model. In the discrete optimization literature this problem is known as unconstrained binary quadratic programming, and it is NP-hard. We prove that the fluctuations of the ground state energy tend to vanish in the thermodynamic limit, and we give a lower bound on this ground state energy. We then present a heuristic algorithm, based on a probabilistic cellular automaton, which appears able to find configurations with energy very close to the minimum, even for quite large instances.
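    For readers unfamiliar with unconstrained binary quadratic programming (QUBO/UBQP), tiny instances can be solved exactly by exhaustion, which makes the problem statement concrete. This is a generic sketch with an illustrative 2x2 instance; the paper's probabilistic-cellular-automaton heuristic, needed for large instances, is not reproduced here.

    ```python
    from itertools import product

    # Unconstrained binary quadratic programming by exhaustive search:
    # minimize E(x) = x^T Q x over x in {0, 1}^n. Exact, but only feasible
    # for small n, since the search space has 2^n states.
    def qubo_bruteforce(Q):
        n = len(Q)
        best_x, best_e = None, float("inf")
        for bits in product((0, 1), repeat=n):
            e = sum(Q[i][j] * bits[i] * bits[j]
                    for i in range(n) for j in range(n))
            if e < best_e:
                best_x, best_e = bits, e
        return best_x, best_e

    # Illustrative instance: diagonal terms reward setting a bit,
    # the off-diagonal term penalizes setting both.
    Q = [[-1.0, 2.0],
         [0.0, -1.0]]
    x, e = qubo_bruteforce(Q)
    ```

    Here the ground state energy is -1, attained by setting exactly one of the two bits; the exhaustive scan returns the first such state it visits.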

  2. Adiabatic Quantum Computation with Neutral Cesium

    NASA Astrophysics Data System (ADS)

    Hankin, Aaron; Parazzoli, L.; Chou, Chin-Wen; Jau, Yuan-Yu; Burns, George; Young, Amber; Kemme, Shanalyn; Ferdinand, Andrew; Biedermann, Grant; Landahl, Andrew; Ivan H. Deutsch Collaboration; Mark Saffman Collaboration

    2013-05-01

    We are implementing a new platform for adiabatic quantum computation (AQC) based on trapped neutral atoms whose coupling is mediated by the dipole-dipole interactions of Rydberg states. Ground state cesium atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism, thereby providing the requisite entangling interactions. As a benchmark we study a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. University of New Mexico: Ivan H. Deutsch, Tyler Keating, Krittika Goyal.

  3. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    PubMed

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

    Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry, with no significant relationship to age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula for the step length ratio in the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than that of a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable to elders' daily gait monitoring, providing valuable information for elderly health care such as abnormal gait recognition and fall risk assessment.

  4. QSPIN: A High Level Java API for Quantum Computing Experimentation

    NASA Technical Reports Server (NTRS)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling is provided to demonstrate current capabilities.
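    One of the classical baselines mentioned above, simulated annealing for QUBO, can be sketched in a few lines. This is a generic single-bit-flip annealer, written in Python for consistency with the other sketches in this section; QSPIN's actual Java solver objects and D-Wave interfaces are not reproduced, and the instance and schedule below are illustrative assumptions.

    ```python
    import math, random

    # Generic simulated annealing for QUBO: minimize E(x) = x^T Q x over
    # x in {0, 1}^n using single-bit-flip moves and geometric cooling.
    def qubo_energy(Q, x):
        n = len(Q)
        return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

    def anneal(Q, sweeps=2000, t0=2.0, t1=0.01, seed=0):
        rng = random.Random(seed)
        n = len(Q)
        x = [rng.randint(0, 1) for _ in range(n)]
        e = qubo_energy(Q, x)
        best_x, best_e = list(x), e
        for s in range(sweeps):
            t = t0 * (t1 / t0) ** (s / (sweeps - 1))  # cooling schedule
            i = rng.randrange(n)
            x[i] ^= 1  # propose flipping one bit
            e_new = qubo_energy(Q, x)
            # Metropolis rule: always accept improvements, sometimes accept
            # uphill moves so the search can escape local minima.
            if e_new <= e or rng.random() < math.exp((e - e_new) / t):
                e = e_new
                if e < best_e:
                    best_x, best_e = list(x), e
            else:
                x[i] ^= 1  # reject: flip the bit back
        return best_x, best_e

    # Small illustrative instance with ground state energy -3.
    Q = [[-2.0, 1.0, 1.0],
         [0.0, -2.0, 1.0],
         [0.0, 0.0, -2.0]]
    x, e = anneal(Q)
    ```

    On hardware such as D-Wave's, the same upper-triangular Q matrix is what gets mapped onto the annealer; the classical loop above is the comparison baseline.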

  5. Supersonic civil airplane study and design: Performance and sonic boom

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    1995-01-01

    Since aircraft configuration plays an important role in aerodynamic performance and sonic boom shape, the configuration of the next-generation supersonic civil transport has to be tailored to meet high aerodynamic performance and low sonic boom requirements. Computational fluid dynamics (CFD) can be used to design airplanes to meet these dual objectives. The work and results in this report support NASA's High Speed Research Program (HSRP). CFD tools and techniques have been developed for general use in sonic boom propagation studies and aerodynamic design. Parallel to the research effort on sonic boom extrapolation, CFD flow solvers have been coupled with a numerical optimization tool to form a design package for aircraft configuration. This CFD optimization package has been applied to configuration design on a low-boom concept and an oblique all-wing concept. A nonlinear unconstrained optimizer for the Parallel Virtual Machine has been developed for aerodynamic design and study.

  6. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    NASA Astrophysics Data System (ADS)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science, and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), fireflies are ranked using a sorting algorithm; the original FA uses bubble sort for this ranking. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used consists of unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst, and mean values, standard deviation, number of comparisons, and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and when the dimension is varied the algorithm performs better at lower dimensions than at higher ones.
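    The ranking step that the paper benchmarks sits inside the standard firefly update loop, which can be sketched as follows. The parameter values and the sphere test function are illustrative assumptions, not those of the paper; Python's built-in sort stands in for the bubble-sort/quick-sort variants being compared.

    ```python
    import math, random

    # Minimal firefly-algorithm sketch on the sphere function.
    def sphere(x):
        return sum(xi * xi for xi in x)

    def firefly(n_fireflies=15, dim=2, iters=100,
                beta0=1.0, gamma=1.0, alpha=0.2):
        rng = random.Random(42)
        pop = [[rng.uniform(-5, 5) for _ in range(dim)]
               for _ in range(n_fireflies)]
        best_init = min(sphere(p) for p in pop)
        for _ in range(iters):
            # Rank fireflies by brightness (lower cost = brighter); this
            # sorting step is exactly where the paper swaps algorithms.
            pop.sort(key=sphere)
            for i in range(n_fireflies):
                for j in range(i):  # after sorting, j is brighter than i
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attraction
                    pop[i] = [a + beta * (b - a)
                                + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
            alpha *= 0.97  # gradually damp the random walk
        return best_init, min(sphere(p) for p in pop)

    best_init, best_final = firefly()
    ```

    Because the brightest firefly never moves within a sweep, the population's best cost is non-increasing across iterations regardless of which sorting algorithm produces the ranking; the choice of sort affects only the bookkeeping cost of each sweep.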

  7. Time-optimal thermalization of single-mode Gaussian states

    NASA Astrophysics Data System (ADS)

    Carlini, Alberto; Mari, Andrea; Giovannetti, Vittorio

    2014-11-01

    We consider the problem of time-optimal control of a continuous bosonic quantum system subject to the action of a Markovian dissipation. In particular, we consider the case of a one-mode Gaussian quantum system prepared in an arbitrary initial state and which relaxes to the steady state due to the action of the dissipative channel. We assume that the unitary part of the dynamics is represented by Gaussian operations which preserve the Gaussian nature of the quantum state, i.e., arbitrary phase rotations, bounded squeezing, and unlimited displacements. In the ideal ansatz of unconstrained quantum control (i.e., when the unitary phase rotations, squeezing, and displacement of the mode can be performed instantaneously), we study how control can be optimized for speeding up the relaxation towards the fixed point of the dynamics and we analytically derive the optimal relaxation time. Our model has potential and interesting applications to the control of modes of electromagnetic radiation and of trapped levitated nanospheres.

  8. SEMIPARAMETRIC ZERO-INFLATED MODELING IN MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA)

    PubMed Central

    Liu, Hai; Ma, Shuangge; Kronmal, Richard; Chan, Kung-Sik

    2013-01-01

    We analyze the Agatston score of coronary artery calcium (CAC) from the Multi-Ethnic Study of Atherosclerosis (MESA) using a semiparametric zero-inflated modeling approach, where the observed CAC scores from this cohort consist of a high frequency of zeroes and continuously distributed positive values. Both partially constrained and unconstrained models are considered to investigate the underlying biological processes of CAC development from zero to positive, and from small amounts to large amounts. Differently from existing studies, a model selection procedure based on likelihood cross-validation is adopted to identify the optimal model, which is justified by comparative Monte Carlo studies. A shrinkage version of the cubic regression spline is used for model estimation and variable selection simultaneously. When applying the proposed methods to the MESA data analysis, we show that the two biological mechanisms influencing the initiation of CAC and the magnitude of CAC when it is positive are better characterized by an unconstrained zero-inflated normal model. Our results are significantly different from those in published studies, and may provide further insights into the biological mechanisms underlying CAC development in humans. This highly flexible statistical framework can be applied to zero-inflated data analyses in other areas. PMID:23805172

  9. 3D shape recovery of smooth surfaces: dropping the fixed-viewpoint assumption.

    PubMed

    Moses, Yael; Shimshoni, Ilan

    2009-07-01

    We present a new method for recovering the 3D shape of a featureless smooth surface from three or more calibrated images illuminated by different light sources (three of them are independent). This method is unique in its ability to handle images taken from unconstrained perspective viewpoints and unconstrained illumination directions. The correspondence between such images is hard to compute and no other known method can handle this problem locally from a small number of images. Our method combines geometric and photometric information in order to recover dense correspondence between the images and accurately computes the 3D shape. Only a single pass starting at one point and local computation are used. This is in contrast to methods that use the occluding contours recovered from many images to initialize and constrain an optimization process. The output of our method can be used to initialize such processes. In the special case of fixed viewpoint, the proposed method becomes a new perspective photometric stereo algorithm. Nevertheless, the introduction of the multiview setup, self-occlusions, and regions close to the occluding boundaries are better handled, and the method is more robust to noise than photometric stereo. Experimental results are presented for simulated and real images.

  10. Optimal mistuning for enhanced aeroelastic stability of transonic fans

    NASA Technical Reports Server (NTRS)

    Hall, K. C.; Crawley, E. F.

    1983-01-01

    An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning; hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each blade was modelled with a single torsional degree of freedom.

  11. A computational algorithm for spacecraft control and momentum management

    NASA Technical Reports Server (NTRS)

    Dzielski, John; Bergmann, Edward; Paradiso, Joseph

    1990-01-01

    Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.

  12. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory study of the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. Quantum computing opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. To explore whether we can utilize the quantum computing machine architecture, we formulate an experimental satellite data assimilation case as a quadratic programming optimization problem. We find a transformation that maps the problem into the Quadratic Unconstrained Binary Optimization (QUBO) framework. The Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in BWT are performed by Boolean operations. The transformed problem will then be solved experimentally as QUBO instances defined on the Chimera graphs of the quantum computer.
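    A QUBO instance is specified entirely by a matrix Q over binary variables: the annealer seeks the binary vector x minimizing xᵀQx. The sketch below uses an assumed toy Q and brute-force enumeration as a classical stand-in for the quantum hardware; it is not the assimilation problem itself.

```python
import itertools
import numpy as np

# Toy QUBO matrix (upper-triangular convention); the energy of a binary
# vector x is x^T Q x, the objective a quantum annealer minimizes.
Q = np.array([[-1.0,  2.0, 0.0],
              [ 0.0, -1.0, 2.0],
              [ 0.0,  0.0, -1.0]])

# Classical stand-in for the annealer: enumerate all binary vectors.
best = min((np.array(x) for x in itertools.product([0, 1], repeat=3)),
           key=lambda x: x @ Q @ x)
print(best)
```

    Real instances are of course far too large to enumerate, which is exactly the motivation for mapping them onto annealing hardware.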

  13. Design Optimization Toolkit: Users' Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed with a flexible user interface to allow easy access to its solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this flexibility, DOTk provides an easy-to-use MATLAB interface that enables users to call its solution methods directly from the MATLAB command window.

  14. A Method of Integrating Aeroheating into Conceptual Reusable Launch Vehicle Design: Evaluation of Advanced Thermal Protection Techniques for Future Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Cowart, Kris

    2001-01-01

    A method for integrating aeroheating analysis into conceptual reusable launch vehicle (RLV) design is presented in this thesis. The process allows faster turnaround in converging an RLV design by producing an optimized thermal protection system (TPS). It consists of the coupling and automation of four computer software packages: MINIVER, TPSX, TCAT, and ADS. MINIVER is an aeroheating code that produces centerline radiation equilibrium temperatures, convective heating rates, and heat loads over simplified vehicle geometries. These include flat plates and swept cylinders that model wings and leading edges, respectively. TPSX is a NASA Ames material properties database that is available on the World Wide Web. The newly developed Thermal Calculation Analysis Tool (TCAT) uses finite difference methods to carry out a transient in-depth 1-D conduction analysis over the center mold line of the vehicle. This is used along with the Automated Design Synthesis (ADS) code to correctly size the vehicle's TPS. The numerical optimizer ADS uses algorithms that solve constrained and unconstrained design problems. The resulting outputs of this process are TPS material types, unit thicknesses, and acreage percentages. TCAT was developed for several purposes. First, it provides a means to calculate the transient in-depth conduction seen by the surface of the TPS material that protects a vehicle during ascent and reentry. Along with the in-depth conduction, radiation from the surface of the material is calculated, along with the temperatures at the backface and interior parts of the TPS material. Second, TCAT contributes added speed and automation to the overall design process. Another motivation in the development of TCAT is optimization. In some vehicles, the TPS accounts for a high percentage of the overall vehicle dry weight. Optimizing the weight of the TPS thereby lowers the percentage of the dry weight accounted for by the TPS, which in turn lowers the cost of the TPS and the overall cost of the vehicle.

  15. Challenges of ambulatory physiological sensing.

    PubMed

    Healey, Jennifer

    2004-01-01

    Applications for ambulatory monitoring span the spectrum from fitness optimization to cardiac defibrillation. This range of applications is associated with a corresponding range of required detection accuracies and a range of inconvenience and discomfort that wearers are willing to tolerate. This paper describes a selection of physiological sensors and how they might best be worn in the unconstrained ambulatory environment to provide the most robust measurements and the greatest comfort to the wearer. Using wireless mobile computing devices, it will be possible to record, analyze and respond to changes in the wearers' physiological signals in real time using these sensors.

  16. The cost of noise reduction for departure and arrival operations of commercial tilt rotor aircraft

    NASA Technical Reports Server (NTRS)

    Faulkner, H. B.; Swan, W. M.

    1976-01-01

    The relationship between direct operating cost (DOC) and noise annoyance due to a departure and an arrival operation was developed for commercial tilt rotor aircraft. This was accomplished by generating a series of tilt rotor aircraft designs to meet various noise goals at minimum DOC. These vehicles ranged across the spectrum of possible noise levels from completely unconstrained to the quietest vehicles that could be designed within the study ground rules. Optimization parameters were varied to find the minimum DOC. This basic variation was then extended to different aircraft sizes and technology time frames.

  17. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents are designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  18. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on the expectation of a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
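    The dualization step above can be illustrated on a one-stage toy problem: the failure probability is an expectation of an indicator, it enters the objective through a multiplier, and the multiplier is raised until the risk bound is met. Everything below (the cost, disturbance model, grid, and step size) is an illustrative assumption, not the EDL tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-stage problem: choose control u to minimize E[(u - w)^2]
# subject to the chance constraint P(u < w) <= 0.1, w ~ N(0, 1) sampled.
w = rng.normal(0.0, 1.0, 5000)
controls = np.linspace(-1.0, 4.0, 501)

def cost(u): return np.mean((u - w)**2)
def risk(u): return np.mean(u < w)      # E[indicator] = failure probability

# Dualize: the chance constraint enters via a multiplier, turning the
# problem into unconstrained minimization of a Lagrangian over u.
def solve(lam):
    return min(controls, key=lambda u: cost(u) + lam * risk(u))

lam = 0.0                               # simple dual (multiplier) search
u = solve(lam)
while risk(u) > 0.1:
    lam += 0.5
    u = solve(lam)

print(u, risk(u))
```

    In the paper's multi-stage setting the same Lagrangian is minimized by a standard DP recursion rather than the grid search used here.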

  19. Development and comparison of advanced reduced-basis methods for the transient structural analysis of unconstrained structures

    NASA Technical Reports Server (NTRS)

    Mcgowan, David M.; Bostic, Susan W.; Camarda, Charles J.

    1993-01-01

    The development of two advanced reduced-basis methods, the force derivative method and the Lanczos method, and two widely used modal methods, the mode displacement method and the mode acceleration method, for transient structural analysis of unconstrained structures is presented. Two example structural problems are studied: an undamped, unconstrained beam subject to a uniformly distributed load which varies as a sinusoidal function of time and an undamped high-speed civil transport aircraft subject to a normal wing tip load which varies as a sinusoidal function of time. These example problems are used to verify the methods and to compare the relative effectiveness of each of the four reduced-basis methods for performing transient structural analyses on unconstrained structures. The methods are verified with a solution obtained by integrating directly the full system of equations of motion, and they are compared using the number of basis vectors required to obtain a desired level of accuracy and the associated computational times as comparison criteria.

  20. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.

    PubMed

    Jung, Sang-Kyu; McDonald, Karen

    2011-08-16

    Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.

  1. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization

    PubMed Central

    2011-01-01

    Background Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. Results The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Conclusion Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net. PMID:21846353

  2. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.

  3. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
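    The statistical core of the approach above is generalized least squares: with a full observation covariance C (colored noise), the estimate is x = (AᵀC⁻¹A)⁻¹AᵀC⁻¹y and the formal error covariance is (AᵀC⁻¹A)⁻¹. The sketch below uses an assumed exponential correlation model and a toy design matrix, not GRACE data or CSR software.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear model y = A x + e with correlated errors e ~ N(0, C).
n = 40
A = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C = 0.1 * np.eye(n) + 0.05 * np.exp(-lags / 5.0)   # assumed colored noise
L = np.linalg.cholesky(C)
truth = np.array([1.0, 2.0])
y = A @ truth + L @ rng.normal(size=n)

# Generalized least squares: weight by the full inverse covariance.
Ci = np.linalg.inv(C)
x = np.linalg.solve(A.T @ Ci @ A, A.T @ Ci @ y)    # GLS estimate
cov = np.linalg.inv(A.T @ Ci @ A)                  # formal error covariance
print(x)
```

    Treating the same data as if the errors were independent would leave the estimate unbiased but make the formal errors in `cov` unrealistically optimistic, which is the failure mode the abstract describes.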

  4. Rear wheel torque vectoring model predictive control with velocity regulation for electric vehicles

    NASA Astrophysics Data System (ADS)

    Siampis, Efstathios; Velenis, Efstathios; Longo, Stefano

    2015-11-01

    In this paper we propose a constrained optimal control architecture for combined velocity, yaw and sideslip regulation for stabilisation of the vehicle near the limit of lateral acceleration using the rear axle electric torque vectoring configuration of an electric vehicle. Nonlinear vehicle and tyre models are used to find reference steady-state cornering conditions and design two model predictive control (MPC) strategies of different levels of fidelity: one that uses a linearised version of the full vehicle model with the rear wheels' torques as the input, and another one that neglects the wheel dynamics and uses the rear wheels' slips as the input instead. After analysing the relative trade-offs between performance and computational effort, we compare the two MPC strategies against each other and against an unconstrained optimal control strategy in the Simulink and Carsim environment.

  5. EXADS - EXPERT SYSTEM FOR AUTOMATED DESIGN SYNTHESIS

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.

    1994-01-01

    The expert system called EXADS was developed to aid users of the Automated Design Synthesis (ADS) general purpose optimization program. Because of the general purpose nature of ADS, it is difficult for a nonexpert to select the best choice of strategy, optimizer, and one-dimensional search options from the one hundred or so combinations that are available. EXADS aids engineers in determining the best combination based on their knowledge of the problem and the expert knowledge previously stored by experts who developed ADS. EXADS is a customized application of the AESOP artificial intelligence program (the general version of AESOP is available separately from COSMIC, as is the ADS program). The expert system consists of two main components. The knowledge base contains about 200 rules and is divided into three categories: constrained, unconstrained, and constrained treated as unconstrained. The EXADS inference engine is rule-based and makes decisions about a particular situation using hypotheses (potential solutions), rules, and answers to questions drawn from the rule base. EXADS is backward-chaining, that is, it works from hypothesis to facts. The rule base was compiled from sources such as literature searches, ADS documentation, and engineer surveys. EXADS will accept answers such as yes, no, maybe, likely, and don't know, or a certainty factor ranging from 0 to 10. When any hypothesis reaches a confidence level of 90% or more, it is deemed the best choice and displayed to the user. If no hypothesis is confirmed, the user can examine explanations of why the hypotheses failed to reach the 90% level. The IBM PC version of EXADS is written in IQ-LISP for execution under DOS 2.0 or higher with a central memory requirement of approximately 512K of 8-bit bytes. This program was developed in 1986.
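    Backward chaining itself (working from hypothesis back to facts) can be sketched in a few lines. The rules and facts below are hypothetical stand-ins; EXADS's actual 200-rule base, certainty factors, and confidence thresholds are not reproduced here.

```python
# Hypothetical toy rule base: each hypothesis maps to the premises that
# must hold for it to be confirmed (not EXADS's real rules).
rules = {
    "use_quasi_newton": ["unconstrained", "smooth_gradients"],
    "use_penalty_method": ["constrained", "many_variables"],
}
facts = {"unconstrained", "smooth_gradients"}   # answers from the user

def prove(goal):
    # Backward chaining: a goal holds if it is a known fact, or if every
    # premise of a rule concluding it can itself be proved.
    if goal in facts:
        return True
    premises = rules.get(goal)
    return premises is not None and all(prove(p) for p in premises)

print(prove("use_quasi_newton"), prove("use_penalty_method"))
```

    A production system like EXADS additionally propagates certainty factors through this recursion instead of booleans, confirming a hypothesis only past a confidence threshold.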

  6. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
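    The decomposition idea above (each weight vector defines one scalar subproblem over the same variables) can be illustrated on two toy objectives; solving the subproblems traces out points on the Pareto front. The objectives, grid, and weight set below are assumptions for illustration, not the paper's benchmark problems.

```python
import numpy as np

# Two toy objectives on [0, 1]; their Pareto-optimal set is the whole interval.
f1 = lambda x: x**2
f2 = lambda x: (x - 1)**2

xs = np.linspace(0.0, 1.0, 1001)
weights = [(w, 1.0 - w) for w in np.linspace(0.0, 1.0, 5)]

# One weighted-aggregation subproblem per weight vector (MOEA/D-style);
# here each is solved by a simple grid search instead of an evolving population.
front = [xs[np.argmin(w1 * f1(xs) + w2 * f2(xs))] for (w1, w2) in weights]
print(front)
```

    EMOSA's contribution is to adapt these weight vectors during the run (at the lowest annealing temperature) so the solutions spread toward unexplored parts of the front, rather than fixing them in advance as here.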

  7. Exploratory power of the harmony search algorithm: analysis and improvements for global numerical optimization.

    PubMed

    Das, Swagatam; Mukhopadhyay, Arpan; Roy, Anwit; Abraham, Ajith; Panigrahi, Bijaya K

    2011-02-01

    The theoretical analysis of evolutionary algorithms is believed to be very important for understanding their internal search mechanism and thus to develop more efficient algorithms. This paper presents a simple mathematical analysis of the explorative search behavior of a recently developed metaheuristic algorithm called harmony search (HS). HS is a derivative-free real parameter optimization algorithm, and it draws inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper analyzes the evolution of the population-variance over successive generations in HS and thereby draws some important conclusions regarding the explorative power of HS. A simple but very useful modification to the classical HS has been proposed in light of the mathematical analysis undertaken here. A comparison with the most recently published variants of HS and four other state-of-the-art optimization algorithms over 15 unconstrained and five constrained benchmark functions reflects the efficiency of the modified HS in terms of final accuracy, convergence speed, and robustness.
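    The classical HS improvisation loop the analysis applies to can be sketched as follows, on the standard sphere benchmark. The parameter values (memory size, HMCR, PAR, bandwidth) are common illustrative defaults, not those derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):            # standard benchmark, global minimum 0 at the origin
    return float(np.sum(x**2))

dim, hms = 5, 20          # problem dimension, harmony memory size
hmcr, par, bw = 0.9, 0.3, 0.05   # memory consideration, pitch adjust, bandwidth
memory = rng.uniform(-5.0, 5.0, (hms, dim))

for _ in range(5000):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                  # take the value from memory...
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:               # ...and maybe pitch-adjust it
                new[j] += bw * rng.uniform(-1.0, 1.0)
        else:                                    # ...or improvise randomly
            new[j] = rng.uniform(-5.0, 5.0)
    worst = int(np.argmax([sphere(x) for x in memory]))
    if sphere(new) < sphere(memory[worst]):      # replace the worst harmony
        memory[worst] = new

best = min(memory, key=sphere)
print(sphere(best))
```

    The paper's variance analysis concerns how the spread of `memory` evolves under exactly this update; its proposed modification adjusts the bandwidth scheme to sustain explorative power.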

  8. Impulsive time-free transfers between halo orbits

    NASA Astrophysics Data System (ADS)

    Hiday, L. A.; Howell, K. C.

    1992-08-01

    A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L1 libration point of the sun-earth/moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.

  9. Impulsive Time-Free Transfers Between Halo Orbits

    NASA Astrophysics Data System (ADS)

    Hiday-Johnston, L. A.; Howell, K. C.

    1996-12-01

    A methodology is developed to design optimal time-free impulsive transfers between three-dimensional halo orbits in the vicinity of the interior L 1 libration point of the Sun-Earth/Moon barycenter system. The transfer trajectories are optimal in the sense that the total characteristic velocity required to implement the transfer exhibits a local minimum. Criteria are established whereby the implementation of a coast in the initial orbit, a coast in the final orbit, or dual coasts accomplishes a reduction in fuel expenditure. The optimality of a reference two-impulse transfer can be determined by examining the slope at the endpoints of a plot of the magnitude of the primer vector on the reference trajectory. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The optimal time of flight on the time-free transfer, and consequently, the departure and arrival locations on the halo orbits are determined by the unconstrained minimization of a function of two variables using a multivariable search technique. Results indicate that the cost can be substantially diminished by the allowance for coasts in the initial and final libration-point orbits.

  10. Adaptive nearly optimal control for a class of continuous-time nonaffine nonlinear systems with inequality constraints.

    PubMed

    Fan, Quan-Yong; Yang, Guang-Hong

    2017-01-01

    The state inequality constraints have hardly been considered in the literature on solving the nonlinear optimal control problem based on the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without the requirement for the probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Genetic algorithm-based multi-objective optimal absorber system for three-dimensional seismic structures

    NASA Astrophysics Data System (ADS)

    Ren, Wenjie; Li, Hongnan; Song, Gangbing; Huo, Linsheng

    2009-03-01

    The problem of optimizing an absorber system for three-dimensional seismic structures is addressed. The objective is to determine the number and position of absorbers to minimize the coupling effects of translation-torsion of structures at minimum cost. A procedure for a multi-objective optimization problem is developed by integrating a dominance-based selection operator and a dominance-based penalty function method. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. The technique guarantees that the better-performing individual wins its competition, exerts only slight selection pressure on individuals, and maintains diversity in the population. Moreover, because the individuals in each generation are evaluated in a single run, less computational effort is required. Penalty function methods are generally used to transform a constrained optimization problem into an unconstrained one. The dominance-based penalty function contains the necessary information on the non-dominated character and infeasible position of an individual, essential for success in seeking a Pareto optimal set. The proposed approach is used to obtain a set of non-dominated designs for a six-storey three-dimensional building with shape memory alloy dampers subjected to earthquake.
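    The penalty transformation this abstract relies on can be sketched generically: constraint violations are added to the objective with a growing weight, so each subproblem is unconstrained. The toy objective and constraint below are illustrative, not the absorber-placement problem (whose dominance-based penalty is more elaborate).

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2
# subject to g(x) = x0 + x1 - 2 <= 0 (the unconstrained optimum violates g).
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
g = lambda x: x[0] + x[1] - 2.0

x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:
    # Exterior penalty: violations are squared and weighted by rho, so each
    # subproblem is unconstrained; warm-start from the previous solution.
    P = lambda x, rho=rho: f(x) + rho * max(0.0, g(x))**2
    x = minimize(P, x, method="Nelder-Mead").x

print(x)  # tends toward the constrained optimum near [1.5, 0.5]
```

    Increasing `rho` gradually (rather than starting huge) keeps each unconstrained subproblem well conditioned, which is the usual reason for the continuation loop.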

  12. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    ERIC Educational Resources Information Center

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

  13. Constrained and Unconstrained Partial Adjacent Category Logit Models for Ordinal Response Variables

    ERIC Educational Resources Information Center

    Fullerton, Andrew S.; Xu, Jun

    2018-01-01

    Adjacent category logit models are ordered regression models that focus on comparisons of adjacent categories. These models are particularly useful for ordinal response variables with categories that are of substantive interest. In this article, we consider unconstrained and constrained versions of the partial adjacent category logit model, which…

  14. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.

  15. The optimization problems of CP operation

    NASA Astrophysics Data System (ADS)

    Kler, A. M.; Stepanova, E. L.; Maximov, A. S.

    2017-11-01

    The problem of enhancing the energy and economic efficiency of cogeneration plants (CPs) is an urgent one, and one of the main methods for solving it is optimization of CP operation. To solve the optimization problems of CP operation, the Energy Systems Institute, SB of RAS, has developed software that performs optimization calculations of CP operation. The software is based on the techniques and software tools of mathematical modeling and optimization of heat and power installations. Detailed mathematical models of new equipment have been developed in this work; they describe with sufficient accuracy the processes that occur in the installations. The developed models include steam turbine models (based on the verification calculation) that take account of all steam turbine compartments and the regeneration system, and also enable calculations with regenerative heaters disconnected. The software for mathematical modeling of equipment and optimization of CP operation implements the technique for optimizing CP operating conditions as software tools and integrates them in a common user interface. The optimization of CP operation often requires determining the minimum and maximum possible total useful electricity capacity of the plant at the set heat loads of consumers, i.e., the interval over which the CP capacity may vary. The software has been applied to optimize the operating conditions of the Novo-Irkutskaya CP of JSC “Irkutskenergo”. The efficiency of operating-condition optimization and the possibility of determining the CP energy characteristics needed for optimization of power system operation are shown.

  16. The optimal community detection of software based on complex networks

    NASA Astrophysics Data System (ADS)

    Huang, Guoyan; Zhang, Peng; Zhang, Bing; Yin, Tengteng; Ren, Jiadong

    2016-02-01

    The community structure of software is important for understanding design patterns and for controlling the development and maintenance process. In order to detect the optimal community structure in a software network, a method called Optimal Partition Software Network (OPSN) is proposed, based on the dependency relationships among software functions. First, by analyzing information from multiple execution traces of the software, we construct a Software Execution Dependency Network (SEDN). Second, based on the relationships among the function nodes in the network, we define Fault Accumulation (FA) to measure the importance of each function node and sort the nodes by this measure. Third, we select the top K (K=1,2,…) nodes as the cores of the primal communities (each containing only one core node). By comparing the dependency relationships between each remaining node and the K communities, we assign the node to the existing community with which it has the closest relationship. Finally, we calculate the modularity for different initial K to obtain the optimal division. Experiments verify that OPSN efficiently detects the optimal community structure in various software systems.
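    The K-seeded, modularity-scored partitioning described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the Fault Accumulation ranking is replaced by a plain degree ranking, and all names are ours.

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected graph given as adjacency sets."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    node_comm = {n: c for c, nodes in enumerate(communities) for n in nodes}
    q = 0.0
    for u in adj:
        for v in adj:
            a = 1.0 if v in adj[u] else 0.0
            expected = len(adj[u]) * len(adj[v]) / (2 * m)
            if node_comm[u] == node_comm[v]:
                q += a - expected
    return q / (2 * m)

def detect_communities(adj, k):
    """Seed k communities from the k highest-ranked nodes (degree here stands
    in for the paper's Fault Accumulation measure), then greedily attach each
    remaining node to the community it shares the most edges with."""
    ranked = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
    communities = [{n} for n in ranked[:k]]
    for node in ranked[k:]:
        best = max(communities, key=lambda c: len(adj[node] & c))
        best.add(node)
    return communities

# Two obvious clusters (triangles) joined by a single edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

best_k = max(range(1, 4), key=lambda k: modularity(adj, detect_communities(adj, k)))
print(best_k)  # -> 2: the optimal division recovers the two triangles
```

    Sweeping K and keeping the partition with the highest modularity is what makes the division "optimal" in the sense used by the abstract.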

  17. Programs for analysis and resizing of complex structures. [computerized minimum weight design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Prasad, B.

    1978-01-01

    The paper describes the PARS (Programs for Analysis and Resizing of Structures) system. PARS is a user oriented system of programs for the minimum weight design of structures modeled by finite elements and subject to stress, displacement, flutter and thermal constraints. The system is built around SPAR - an efficient and modular general purpose finite element program, and consists of a series of processors that communicate through the use of a data base. An efficient optimizer based on the Sequence of Unconstrained Minimization Technique (SUMT) with an extended interior penalty function and Newton's method is used. Several problems are presented for demonstration of the system capabilities.
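    The SUMT idea named above can be sketched in a few lines: minimize the objective plus a weighted interior penalty with Newton's method, then shrink the weight and repeat. This is a minimal sketch assuming a classic log-barrier rather than the extended interior penalty PARS actually uses; all names are illustrative.

```python
import math

def sumt(f_gh, barrier, x, mu=1.0, shrink=0.1, outer=10, inner=50):
    """Sequence of Unconstrained Minimization Technique: repeatedly minimize
    f(x) + mu*B(x) with Newton's method, shrinking mu between stages."""
    for _ in range(outer):
        for _ in range(inner):
            g1, h1 = f_gh(x)
            _, g2, h2 = barrier(x)
            step = (g1 + mu * g2) / (h1 + mu * h2)  # 1-D Newton step
            t = 1.0
            while True:  # backtrack so the iterate stays strictly feasible
                try:
                    barrier(x - t * step)
                    break
                except ValueError:
                    t *= 0.5
            x -= t * step
            if abs(t * step) < 1e-12:
                break
        mu *= shrink
    return x

# toy problem: minimize f(x) = x^2 subject to g(x) = x - 1 >= 0
f = lambda x: (2.0 * x, 2.0)                # (f', f'')
barrier = lambda x: (-math.log(x - 1),      # raises ValueError when infeasible
                     -1.0 / (x - 1),
                     1.0 / (x - 1) ** 2)
x_star = sumt(f, barrier, x=2.0)
print(abs(x_star - 1.0) < 1e-3)  # converges toward the constrained optimum x = 1
```

    The extended interior penalty used by PARS additionally remains defined for slightly infeasible points, which makes the line search more forgiving than the plain log-barrier shown here.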

  18. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
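    The unconstrained baseline the paper compares against can be sketched with ordinary least squares: Euler's equation tau = I*wdot + w x (I*w) is linear in the six unique entries of I, so stacked samples give a linear system. This sketch uses noise-free synthetic data and illustrative values; the paper's constrained version would additionally impose the LMI bounds through an SDP solver.

```python
import numpy as np

def regressor(w, wdot):
    """3x6 matrix mapping [Ixx, Iyy, Izz, Ixy, Ixz, Iyz] to torque
    via Euler's equation  tau = I*wdot + w x (I*w)."""
    def apply(theta):
        Ixx, Iyy, Izz, Ixy, Ixz, Iyz = theta
        I = np.array([[Ixx, Ixy, Ixz], [Ixy, Iyy, Iyz], [Ixz, Iyz, Izz]])
        return I @ wdot + np.cross(w, I @ w)
    return np.column_stack([apply(e) for e in np.eye(6)])

rng = np.random.default_rng(0)
I_true = np.array([[10.0, 0.5, 0.2], [0.5, 8.0, 0.1], [0.2, 0.1, 6.0]])
theta_true = np.array([10.0, 8.0, 6.0, 0.5, 0.2, 0.1])

A_rows, b_rows = [], []
for _ in range(20):  # 20 synthetic (rate, acceleration, torque) samples
    w, wdot = rng.normal(size=3), rng.normal(size=3)
    A_rows.append(regressor(w, wdot))
    b_rows.append(I_true @ wdot + np.cross(w, I_true @ w))
A, b = np.vstack(A_rows), np.concatenate(b_rows)

theta_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(theta_hat, theta_true))  # True for noise-free data
```

    With measurement noise, this unconstrained estimate can violate physical requirements (e.g. positive definiteness and triangle inequalities on the principal moments), which is exactly what the LMI constraints in the paper prevent.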

  19. PopED lite: An optimal design software for preclinical pharmacokinetic and pharmacodynamic studies.

    PubMed

    Aoki, Yasunori; Sundqvist, Monika; Hooker, Andrew C; Gennemark, Peter

    2016-04-01

    Optimal experimental design approaches are seldom used in preclinical drug discovery. The objective is to develop an optimal design software tool specifically designed for preclinical applications in order to increase the efficiency of drug discovery in vivo studies. Several realistic experimental design case studies were collected and many preclinical experimental teams were consulted to determine the design goal of the software tool. The tool obtains an optimized experimental design by solving a constrained optimization problem, where each experimental design is evaluated using some function of the Fisher Information Matrix. The software was implemented in C++ using the Qt framework to assure responsive user-software interaction through a rich graphical user interface while at the same time achieving the desired computational speed. In addition, a discrete global optimization algorithm was developed and implemented. The software design goals were simplicity, speed and intuition. Based on these design goals, we have developed the publicly available software PopED lite (http://www.bluetree.me/PopED_lite). Optimization computation was, on average over 14 test problems, 30 times faster in PopED lite than in an existing optimal design software tool. PopED lite is now used in real drug discovery projects and a few of these case studies are presented in this paper. PopED lite is designed to be simple, fast and intuitive. Simple, to give many users access to basic optimal design calculations. Fast, to fit a short design-execution cycle and allow interactive experimental design (test one design, discuss proposed design, test another design, etc.). Intuitive, so that the input to and output from the software tool can easily be understood by users without knowledge of the theory of optimal design. In this way, PopED lite is highly useful in practice and complements existing tools. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Unconstrained Structural Equation Models of Latent Interactions: Contrasting Residual- and Mean-Centered Approaches

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Wen, Zhonglin; Hau, Kit-Tai; Little, Todd D.; Bovaird, James A.; Widaman, Keith F.

    2007-01-01

    Little, Bovaird and Widaman (2006) proposed an unconstrained approach with residual centering for estimating latent interaction effects as an alternative to the mean-centered approach proposed by Marsh, Wen, and Hau (2004, 2006). Little et al. also differed from Marsh et al. in the number of indicators used to infer the latent interaction factor…

  1. Improved alignment evaluation and optimization : final report.

    DOT National Transportation Integrated Search

    2007-09-11

    This report outlines the development of an enhanced highway alignment evaluation and optimization : model. A GIS-based software tool is prepared for alignment optimization that uses genetic algorithms for : optimal search. The software is capable of ...

  2. Preconditioning strategies for nonlinear conjugate gradient methods, based on quasi-Newton updates

    NASA Astrophysics Data System (ADS)

    Andrea, Caliciotti; Giovanni, Fasano; Massimo, Roma

    2016-10-01

    This paper reports two proposals for possible preconditioners for the Nonlinear Conjugate Gradient (NCG) method in large scale unconstrained optimization. On one hand, the common idea behind our preconditioners is inspired by L-BFGS quasi-Newton updates; on the other hand, we aim at explicitly approximating, in some sense, the inverse of the Hessian matrix. Since we deal with large scale optimization problems, we propose matrix-free approaches where the preconditioners are built using symmetric low-rank updating formulae. Our distinctive new contributions rely on using information on the objective function, collected as a by-product of the NCG at previous iterations. Broadly speaking, our first approach exploits the secant equation in order to impose interpolation conditions on the objective function. In the second proposal we adopt an ad hoc modified-secant approach in order to possibly guarantee some additional theoretical properties.
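    The mechanics of preconditioned NCG can be sketched as below. This is a toy illustration with an exact diagonal inverse-Hessian preconditioner standing in for the paper's L-BFGS-style low-rank construction; all names and the test problem are ours.

```python
import numpy as np

def pncg(grad, hess_vec, M_inv, x, iters=50, tol=1e-10):
    """Preconditioned nonlinear CG (Polak-Ribiere) with an exact line search
    for a quadratic model; M_inv approximates the inverse Hessian, in the
    spirit of quasi-Newton preconditioning."""
    g = grad(x)
    d = -M_inv(g)
    for _ in range(iters):
        Hd = hess_vec(d)
        alpha = -(g @ d) / (d @ Hd)          # exact step for a quadratic
        x = x + alpha * d
        g_new = g + alpha * Hd
        if np.linalg.norm(g_new) < tol:
            return x
        y = M_inv(g_new)
        beta = max(0.0, (g_new - g) @ y / (g @ M_inv(g)))  # preconditioned PRP+
        d = -y + beta * d
        g = g_new
    return x

# ill-conditioned quadratic: f(x) = 0.5 * x^T diag(h) x - b^T x
h = np.array([1.0, 10.0, 100.0, 1000.0])
b = np.ones(4)
grad = lambda x: h * x - b
hvec = lambda d: h * d
minv = lambda g: g / h          # the ideal (inverse-Hessian) preconditioner
x = pncg(grad, hvec, minv, np.zeros(4))
print(np.allclose(h * x, b))    # True: the stationary point satisfies h*x = b
```

    With the ideal preconditioner the method converges in one step here; the point of the paper is to build a cheap, matrix-free approximation of that ideal from NCG by-products.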

  3. Integrated design optimization research and development in an industrial environment

    NASA Astrophysics Data System (ADS)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-04-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.

  4. Integrated design optimization research and development in an industrial environment

    NASA Technical Reports Server (NTRS)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-01-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.

  5. Utility of coupling nonlinear optimization methods with numerical modeling software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.

    1996-08-05

    Results of using GLO (Global Local Optimizer), a general purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL and LOCAL. GLO is designed for controlling, and easy coupling to, any scientific software application. GLO runs the optimization module and the scientific application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application, and GLO runs the application with the new values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model (Taylor cylinder impact test) is presented.
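    The propose-run-extract-score loop described above is a generic pattern worth making concrete. In this sketch a trivial function stands in for the external scientific application, and a crude random search stands in for the GLOBAL/LOCAL modules; every name and number is illustrative, not from GLO.

```python
import random

def run_application(params):
    """Stand-in for the external scientific code (e.g. a Taylor cylinder
    impact simulation): returns a synthetic 'computed' response."""
    a, b = params
    return a * 2.0 + b  # pretend physics

def objective(params, desired=7.0):
    """GLO-GET-style step: compare the application output with the
    desired (e.g. measured) result."""
    return (run_application(params) - desired) ** 2

def optimize(n_iters=5000, seed=1):
    """GLO-style loop: propose parameters, run the application, score the
    mismatch, and keep the best candidate."""
    rng = random.Random(seed)
    best, best_f = None, float("inf")
    for _ in range(n_iters):
        cand = (rng.uniform(0, 5), rng.uniform(0, 5))
        f = objective(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

params, err = optimize()
print(err < 1e-2)  # expected True: the loop drives the mismatch toward zero
```

    In the real package, "insert the new parameter values" and "extract the results" are file-level operations (GLO-PUT/GLO-GET) around an unmodified application binary, which is what makes the coupling application-agnostic.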

  6. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limit of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are introduced to avoid overfitting. In order to deal with the single image per person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approach and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.

  7. Locating Critical Circular and Unconstrained Failure Surface in Slope Stability Analysis with Tailored Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Pasik, Tomasz; van der Meij, Raymond

    2017-12-01

    This article presents an efficient search method for representative circular and unconstrained slip surfaces with the use of a tailored genetic algorithm. Searches for unconstrained slip planes with rigid equilibrium methods are still uncommon in engineering practice, and few publications regarding truly free slip planes exist. The proposed method is an effective procedure resulting from the right combination of initial population type and selection, crossover, and mutation methods. The procedure needs little computational effort to find the optimum, unconstrained slip plane. The methodology described in this paper is implemented using Mathematica. The implementation, along with further explanations, is fully presented so the results can be reproduced. Sample slope stability calculations are performed for four cases, along with a detailed result interpretation. Two cases are compared with analyses described in earlier publications. The remaining two are practical cases of slope stability analyses of dikes in the Netherlands. These four cases show the benefits of analyzing slope stability with a rigid equilibrium method combined with a genetic algorithm. The paper concludes by describing possibilities and limitations of using the genetic algorithm in the context of the slope stability problem.
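    The selection/crossover/mutation combination at the heart of such a search can be sketched generically. Here a simple quadratic stands in for the factor-of-safety computation (which in practice comes from a rigid equilibrium method such as Bishop's), and all parameter values are illustrative, not the paper's tailored settings.

```python
import random

def safety_factor(center):
    """Stand-in for a factor-of-safety evaluation of a trial slip circle
    centred at `center`; the critical centre of this toy surface is
    (1.0, -2.0). The quadratic form is purely illustrative."""
    x, y = center
    return 1.2 + (x - 1.0) ** 2 + (y + 2.0) ** 2

def genetic_search(pop_size=40, gens=60, seed=7):
    rng = random.Random(seed)
    pop = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(pop_size)]
    sigma = 0.5
    for _ in range(gens):
        def pick():                              # tournament selection
            return min(rng.sample(pop, 3), key=safety_factor)
        nxt = [min(pop, key=safety_factor)]      # elitism: keep the best
        while len(nxt) < pop_size:
            (x1, y1), (x2, y2) = pick(), pick()
            a = rng.random()                     # blend crossover
            child = (a * x1 + (1 - a) * x2, a * y1 + (1 - a) * y2)
            if rng.random() < 0.3:               # Gaussian mutation
                child = (child[0] + rng.gauss(0, sigma),
                         child[1] + rng.gauss(0, sigma))
            nxt.append(child)
        pop = nxt
        sigma *= 0.93                            # tighten the search over time
    return min(pop, key=safety_factor)

best = genetic_search()
print(safety_factor(best) - 1.2 < 0.1)  # expected True: near the critical circle
```

    For truly unconstrained (non-circular) surfaces the chromosome would encode a polyline of surface vertices instead of circle parameters, but the evolutionary loop is unchanged.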

  8. A "Reverse-Schur" Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts-in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.

  9. A “Reverse-Schur” Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule’s electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts–in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method. PMID:23055839

  10. Mitigating Short-Term Variations of Photovoltaic Generation Using Energy Storage with VOLTTRON

    NASA Astrophysics Data System (ADS)

    Morrissey, Kevin

    A smart-building communications system performs smoothing on photovoltaic (PV) power generation using a battery energy storage system (BESS). The system runs using VOLTTRON(TM), a multi-agent Python-based software platform dedicated to power systems. The VOLTTRON(TM) system designed for this project runs synergistically with the larger University of Washington VOLTTRON(TM) environment, which is designed to operate UW device communications and databases as well as to perform real-time operations for research. One such research algorithm that operates simultaneously with this PV Smoothing System is an energy cost optimization system which optimizes net demand and associated cost throughout a day using the BESS. The PV Smoothing System features an active low-pass filter with an adaptable time constant, as well as adjustable limitations on the output power and accumulated battery energy of the BESS contribution. The system was analyzed using 26 days of PV generation at 1-second resolution. PV smoothing was studied with unconstrained BESS contribution as well as under a broad range of BESS constraints analogous to variable-sized storage. It was determined that a large inverter output power was more important for PV smoothing than a large battery energy capacity. Two methods of selecting the time constant in real time, static and adaptive, are studied for their impact on system performance. It was found that both systems provide a high level of PV smoothing performance, within 8% of the ideal case where the best time constant is known ahead of time. The system was run in real time using VOLTTRON(TM) with BESS limitations of 5 kW/6.5 kWh and an adaptive update period of 7 days. The system behaved as expected given the BESS parameters and time constant selection methods, providing smoothing on the PV generation and updating the time constant periodically using the adaptive time constant selection method.
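    The core mechanism (a first-order low-pass filter on PV power, with the battery supplying the difference subject to power and energy limits) can be sketched as follows. The limit values mirror the 5 kW / 6.5 kWh case in the abstract, but the function and its details are an assumption, not the project's code.

```python
def smooth_pv(pv, dt=1.0, tau=300.0, p_max=5.0, e_max=6.5 * 3600):
    """First-order low-pass smoothing of a PV power series (kW, 1-s samples).
    The battery supplies the raw-minus-smoothed difference, subject to an
    inverter power limit and an energy-capacity limit around mid-charge."""
    out, y, e = [], pv[0], 0.0
    for p in pv:
        y += (dt / tau) * (p - y)                    # low-pass filter step
        batt = max(-p_max, min(p_max, y - p))        # inverter power limit
        e_new = e + batt * dt                        # kW*s relative to mid-SOC
        if not (0.0 <= e_new + e_max / 2 <= e_max):  # energy limit
            batt, e_new = 0.0, e
        e = e_new
        out.append(p + batt)                         # smoothed power to the grid
    return out

# a step drop in irradiance (e.g. a passing cloud): 4 kW -> 1 kW
pv = [4.0] * 600 + [1.0] * 600
sm = smooth_pv(pv)
# the smoothed output decays gradually instead of dropping instantly
print(max(abs(sm[i] - sm[i - 1]) for i in range(1, len(sm))) < 0.1)
```

    The adaptive variant in the abstract would additionally retune `tau` periodically from recent PV variability, trading smoothing strength against battery duty.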

  11. Gender recognition from unconstrained and articulated human body.

    PubMed

    Wu, Qin; Guo, Guodong

    2014-01-01

    Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts are needed to combine together, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on the partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition.

  12. Gender Recognition from Unconstrained and Articulated Human Body

    PubMed Central

    Wu, Qin; Guo, Guodong

    2014-01-01

    Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts are needed to combine together, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on the partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition. PMID:24977203

  13. The use of single-date MODIS imagery for estimating large-scale urban impervious surface fraction with spectral mixture analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Wu, Changshan

    2013-12-01

    Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
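    Both steps above (deriving endmember signatures by least squares from samples with known abundances, then unconstrained SMA unmixing) are linear algebra and can be sketched directly. The spectra and abundances below are synthetic, noise-free placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# assumed "true" endmember spectra (rows: 4 bands; columns: impervious,
# vegetation, soil) -- purely illustrative numbers
E_true = np.array([[0.30, 0.05, 0.20],
                   [0.35, 0.08, 0.25],
                   [0.40, 0.45, 0.30],
                   [0.45, 0.30, 0.50]])

# training samples with KNOWN abundances (rows sum to 1)
F = rng.dirichlet(np.ones(3), size=30)          # 30 samples x 3 endmembers
P = F @ E_true.T                                # their observed reflectances

# LSS step: recover endmember spectra from the known-abundance samples
E_hat = np.linalg.lstsq(F, P, rcond=None)[0].T  # bands x endmembers

# unconstrained SMA: unmix a pixel by ordinary least squares
# (no sum-to-one or nonnegativity constraints imposed)
pixel = E_true @ np.array([0.6, 0.3, 0.1])
f_hat = np.linalg.lstsq(E_hat, pixel, rcond=None)[0]
print(np.round(f_hat, 3))  # ~ [0.6, 0.3, 0.1]
```

    The fully constrained variant compared in the study adds the sum-to-one and nonnegativity conditions, which turns the per-pixel solve into a constrained least squares problem.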

  14. Using lod scores to detect sex differences in male-female recombination fractions.

    PubMed

    Feenstra, B; Greenberg, D A; Hodge, S E

    2004-01-01

    Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect an RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (theta(female), theta(male)); and "constrained," requiring theta(female) = theta(male). We then examined the DeltaELOD (defined as the difference between the maximized constrained and unconstrained ELODs) and calculated minimum sample sizes required to achieve statistically significant DeltaELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset, and the optimal proportion p(circ) as that value of p that maximizes DeltaELOD. We determined that, surprisingly, p(circ) does not necessarily equal (1/2), although it does fall between approximately 0.4 and 0.6 in most situations. (3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) to the maximum likelihood estimates of theta(female) and theta(male), even though ELOD is reduced (see point 2). This fact is important because often investigators cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel

  15. Translator for Optimizing Fluid-Handling Components

    NASA Technical Reports Server (NTRS)

    Landon, Mark; Perry, Ernest

    2007-01-01

    A software interface has been devised to facilitate optimization of the shapes of valves, elbows, fittings, and other components used to handle fluids under extreme conditions. This software interface translates data files generated by PLOT3D (a NASA grid-based plotting-and-data-display program) and by computational fluid dynamics (CFD) software into a format in which the files can be read by Sculptor, which is a shape-deformation-and-optimization program. Sculptor enables the user to interactively, smoothly, and arbitrarily deform the surfaces and volumes in two- and three-dimensional CFD models. Sculptor also includes design-optimization algorithms that can be used in conjunction with the arbitrary-shape-deformation components to perform automatic shape optimization. In the optimization process, the output of the CFD software is used as feedback while the optimizer strives to satisfy design criteria that could include, for example, improved values of pressure loss, velocity, flow quality, mass flow, etc.

  16. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for 50% compression rate, optimal wavelet, mean+/-SD, 5.46+/-1.01%; worst wavelet 12.76+/-2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.

  17. Multi-parameter geometrical scaledown study for energy optimization of MTJ and related spintronics nanodevices

    NASA Astrophysics Data System (ADS)

    Farhat, I. A. H.; Alpha, C.; Gale, E.; Atia, D. Y.; Stein, A.; Isakovic, A. F.

    The scaledown of magnetic tunnel junctions (MTJ) and related nanoscale spintronics devices poses unique challenges for energy optimization of their performance. We demonstrate the dependence of the switching current on the scaledown variable, while considering the influence of geometric parameters of MTJ, such as the free layer thickness, tfree, lateral size of the MTJ, w, and the anisotropy parameter of the MTJ. At the same time, we point out which values of the saturation magnetization, Ms, and anisotropy field, Hk, can lead to lowering the switching current and overall decrease of the energy needed to operate an MTJ. It is demonstrated that scaledown via decreasing the lateral size of the MTJ, while allowing some other parameters to be unconstrained, can improve energy performance by a measurable factor, shown to be the function of both geometric and physical parameters above. Given the complex interdependencies among both families of parameters, we developed a particle swarm optimization (PSO) algorithm that can simultaneously lower energy of operation and the switching current density. Results we obtained in scaledown study and via PSO optimization are compared to experimental results. Support by Mubadala-SRC 2012-VJ-2335 is acknowledged, as are staff at Cornell-CNF and BNL-CFN.
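    A particle swarm optimizer of the kind mentioned above is compact enough to sketch. The "energy" landscape below is a made-up smooth function of lateral size and free-layer thickness, not the paper's MTJ model; all constants are illustrative.

```python
import random

def energy(params):
    """Stand-in for an MTJ operating-energy model: a smooth function of
    lateral size w (nm) and free-layer thickness t (nm) with a known
    minimum at w = 30, t = 1.5 (values are illustrative only)."""
    w, t = params
    return (w - 30.0) ** 2 / 100.0 + (t - 1.5) ** 2

def pso(n=20, iters=100, seed=3):
    """Plain global-best PSO with inertia and cognitive/social pulls."""
    rng = random.Random(seed)
    pos = [[rng.uniform(10, 100), rng.uniform(0.5, 3.0)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=energy)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if energy(pos[i]) < energy(pbest[i]):
                pbest[i] = pos[i][:]
                if energy(pos[i]) < energy(gbest):
                    gbest = pos[i][:]
    return gbest

w_opt, t_opt = pso()
print(abs(w_opt - 30.0) < 1.0 and abs(t_opt - 1.5) < 0.1)  # expected True
```

    The paper's PSO additionally treats the switching current density as a second objective; a common way to handle that is to fold both quantities into one weighted fitness, which slots into the same loop.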

  18. A modified three-term PRP conjugate gradient algorithm for optimization models.

    PubMed

    Wu, Yanlin

    2017-01-01

    The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization, where their method has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but this fails under the Wolfe line search technique. Inspired by their method, we make a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the step length is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.
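
    The three-term direction described above can be sketched on a toy problem. The quadratic objective, the Armijo backtracking parameters, and the stopping tolerance below are illustrative assumptions rather than the paper's exact algorithm, but the update d = -g_new + beta*d - theta*y is the Zhang et al. three-term PRP form, for which the last two terms cancel in the inner product with the new gradient, so g·d = -||g||² holds independently of the line search:

```python
import numpy as np

# Toy convex quadratic f(x) = 0.5 x^T A x - b^T x, minimizer x* = A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

# Three-term PRP direction (Zhang et al. form):
#   d_{k+1} = -g_{k+1} + beta_k d_k - theta_k y_k,
# which satisfies g_{k+1}^T d_{k+1} = -||g_{k+1}||^2 (sufficient descent)
# regardless of the line search used.
x = np.zeros(2)
g = grad(x)
d = -g
for _ in range(100):
    # Armijo backtracking line search (stand-in for a Wolfe line search)
    alpha = 1.0
    while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
        alpha *= 0.5
    x_new = x + alpha * d
    g_new = grad(x_new)
    if np.linalg.norm(g_new) < 1e-10:
        x, g = x_new, g_new
        break
    y = g_new - g
    denom = g @ g
    beta = (g_new @ y) / denom   # PRP coefficient
    theta = (g_new @ d) / denom  # third-term coefficient enforcing descent
    d = -g_new + beta * d - theta * y
    x, g = x_new, g_new

print(np.round(x, 4))  # close to np.linalg.solve(A, b) = [0.2, 0.4]
```

Because beta*(g_new·d) and theta*(g_new·y) cancel exactly, every search direction is a descent direction at its iterate, which is the sufficient descent property the abstract refers to.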

  19. An optimal output feedback gain variation scheme for the control of plants exhibiting gross parameter changes

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.

    1987-01-01

    A concept was developed for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes. The approach was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include derivation of necessary conditions for optimality for the general problem formulation, and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories which are arbitrarily close to the unconstrained gain solution for each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation. Many aerospace systems fall into this category.

  20. A quantum annealing approach for fault detection and diagnosis of graph-based systems

    NASA Astrophysics Data System (ADS)

    Perdomo-Ortiz, A.; Fluegemann, J.; Narasimhan, S.; Biswas, R.; Smelyanskiy, V. N.

    2015-02-01

    Diagnosing the minimal set of faults capable of explaining a set of given observations, e.g., from sensor readouts, is a hard combinatorial optimization problem usually tackled with artificial intelligence techniques. We present the mapping of this combinatorial problem to quadratic unconstrained binary optimization (QUBO), and the experimental results of instances embedded onto a quantum annealing device with 509 quantum bits. Besides being the first time a quantum approach has been proposed for problems in the advanced diagnostics community, to the best of our knowledge this work is also the first research utilizing the route Problem → QUBO → Direct embedding into quantum hardware, where we are able to implement and tackle problem instances with sizes that go beyond previously reported toy-model proof-of-principle quantum annealing implementations; this is a significant leap in the solution of problems via direct-embedding adiabatic quantum optimization. We discuss some of the programmability challenges in the current generation of the quantum device as well as a few possible ways to extend this work to more complex arbitrary network graphs.
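
    The Problem → QUBO step can be made concrete with a minimal sketch. The Q matrix below and its diagnosis interpretation are invented for illustration (they are not the paper's actual encoding), and exhaustive search stands in for the quantum annealer:

```python
import itertools
import numpy as np

# Tiny illustrative QUBO: minimize x^T Q x over binary x. Diagonal terms
# reward fault hypotheses that explain observations; off-diagonal penalty
# terms discourage selecting hypotheses that jointly contradict a sensor
# reading. (Toy values, not the paper's diagnosis mapping.)
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.5],
])

def qubo_energy(x, Q):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Exhaustive search stands in for the annealer on 3 variables.
best = min(itertools.product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))  # (1, 0, 1) -2.5
```

On hardware, the remaining step is embedding Q's connectivity graph into the device's qubit-coupler graph, which is the direct-embedding challenge the abstract highlights.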

  1. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reform this problem into an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on the six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning. In the planning space, the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method can solve the dead point problem effectively.
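
    The baseline APF idea that the paper improves on can be sketched in a few lines. The gains, obstacle position, and step rule below are illustrative assumptions (this is the classic attractive-plus-repulsive potential, not the paper's optimal-control-corrected formulation):

```python
import numpy as np

# Classic APF sketch: attractive well at the goal plus a repulsive term
# active within distance d0 of one obstacle; the path follows -grad U.
goal = np.array([5.0, 5.0])
obstacle = np.array([2.5, 3.0])

def grad_U(p, k_att=1.0, k_rep=1.0, d0=1.5):
    g = k_att * (p - goal)            # gradient of 0.5*k_att*|p - goal|^2
    diff = p - obstacle
    d = np.linalg.norm(diff)
    if d < d0:                        # gradient of 0.5*k_rep*(1/d - 1/d0)^2
        g += -k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return g

p = np.array([0.0, 0.0])
for _ in range(2000):
    g = grad_U(p)
    step = g / max(1.0, np.linalg.norm(g))  # cap step length for stability
    p = p - 0.05 * step
print(np.round(p, 2))  # near the goal
```

With an obstacle placed symmetrically between start and goal, this plain gradient descent can stall at a force-balance point; that is the "dead point" problem the improved optimal-control formulation addresses.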

  2. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories, and Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  3. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    NASA Technical Reports Server (NTRS)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical display of engine motions and pressures and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion isothermal analysis. One is for three adjustable input and one is for four. Also, two optimization searches for calculated piston motion are presented for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  4. Quantum-enhanced reinforcement learning for finite-episode games with discrete state spaces

    NASA Astrophysics Data System (ADS)

    Neukart, Florian; Von Dollen, David; Seidel, Christian; Compostella, Gabriele

    2017-12-01

    Quantum annealing algorithms belong to the class of metaheuristic tools, applicable to solving binary optimization problems. Hardware implementations of quantum annealing, such as the quantum annealing machines produced by D-Wave Systems, have been subject to multiple analyses in research, with the aim of characterizing the technology's usefulness for optimization and sampling tasks. Here, we present a way to partially embed both Monte Carlo policy iteration for finding an optimal policy on random observations, as well as n sub-optimal state-value functions for approximating an improved state-value function given a policy, for finite horizon games with discrete state spaces on a D-Wave 2000Q quantum processing unit (QPU). We explain how both problems can be expressed as a quadratic unconstrained binary optimization (QUBO) problem, and show that quantum-enhanced Monte Carlo policy evaluation allows for finding equivalent or better state-value functions for a given policy with the same number of episodes compared to a purely classical Monte Carlo algorithm. Additionally, we describe a quantum-classical policy learning algorithm. Our first and foremost aim is to explain how to represent and solve parts of these problems with the help of the QPU, and not to prove supremacy over every existing classical policy evaluation algorithm.

  5. New hybrid conjugate gradient methods with the generalized Wolfe line search.

    PubMed

    Xu, Xiao; Kong, Fan-Yu

    2016-01-01

    The conjugate gradient method is an efficient technique for solving the unconstrained optimization problem. In this paper, we make a linear combination, with parameters β k, of the DY method and the HS method, and put forward a hybrid method of DY and HS. We also propose a hybrid of FR and PRP by the same means. Additionally, to implement the two hybrid methods, we generalize the Wolfe line search to compute the step size α k of the two hybrid methods. With the new Wolfe line search, the descent property and global convergence of the two hybrid methods can be proved.

  6. Graph cuts via l1 norm minimization.

    PubMed

    Bhusnurmath, Arvind; Taylor, Camillo J

    2008-10-01

    Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
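
    The energy being reformulated can be made concrete with a toy instance. The unary and pairwise weights below are invented for illustration, and exhaustive search stands in for the interior-point solver; the point is that the pairwise part of a binary graph cut energy is exactly a weighted l1 norm of label differences:

```python
import itertools

# Binary labeling energy on 4 nodes: per-node unary costs plus
# pairwise terms w_ij * |x_i - x_j| (a weighted l1 norm over edges).
# Weights are toy values for illustration only.
unary = [(0.0, 2.0), (0.0, 1.5), (2.0, 0.0), (1.0, 0.0)]  # (cost of 0, cost of 1)
edges = {(0, 1): 1.0, (1, 2): 0.5, (2, 3): 1.0, (0, 2): 0.3}

def energy(x):
    e = sum(unary[i][xi] for i, xi in enumerate(x))
    e += sum(w * abs(x[i] - x[j]) for (i, j), w in edges.items())
    return e

# Brute force stands in for min-cut / continuous l1 minimization on 4 nodes.
best = min(itertools.product([0, 1], repeat=4), key=energy)
print(best, round(energy(best), 6))  # (0, 0, 1, 1) 0.8
```

Relaxing the binary labels to reals turns this into the unconstrained l1 problem the paper solves; for this energy class the relaxation recovers an optimal cut.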

  7. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
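
    The exterior-penalty idea behind SUMT, and the ill-conditioning the abstract describes, can be sketched on a toy problem. The objective, constraint, penalty schedule, and step rule below are illustrative assumptions, not the paper's structural formulation:

```python
import numpy as np

# SUMT exterior-penalty sketch: minimize f(x) = x1^2 + x2^2
# subject to g(x) = 1 - x1 - x2 <= 0. Analytic optimum: x* = (0.5, 0.5).
x = np.zeros(2)
r = 1.0
for _ in range(6):              # outer SUMT iterations: r = 1, 10, ..., 1e5
    for _ in range(2000):       # crude inner unconstrained minimization
        g = max(0.0, 1.0 - x[0] - x[1])
        grad = 2.0 * x - 2.0 * r * g * np.ones(2)   # grad of f + r*max(0,g)^2
        # Curvature grows with r, so the stable step shrinks like 1/r:
        # this is the ill-conditioning that motivates the stiff-ODE approach.
        x = x - (0.5 / (1.0 + r)) * grad
    r *= 10.0
print(np.round(x, 3))  # close to [0.5, 0.5]
```

Each increase of r tightens the constraint satisfaction but stiffens the penalized Hessian, which is why the paper treats the minimization as integration of a stiff differential equation system.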

  8. Finite elements based on consistently assumed stresses and displacements

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.

    1985-01-01

    Finite element stiffness matrices are derived using an extended Hellinger-Reissner principle in which internal displacements are added to serve as Lagrange multipliers to introduce the equilibrium constraint in each element. In a consistent formulation the assumed stresses are initially unconstrained and complete polynomials and the total displacements are also complete such that the corresponding strains are complete in the same order as the stresses. Several examples indicate that resulting properties for elements constructed by this consistent formulation are ideal and are less sensitive to distortions of element geometries. The method has been used to find the optimal stress terms for plane elements, 3-D solids, axisymmetric solids, and plate bending elements.

  9. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  10. Advanced Structural Optimization Under Consideration of Cost Tracking

    NASA Astrophysics Data System (ADS)

    Zell, D.; Link, T.; Bickelmaier, S.; Albinger, J.; Weikert, S.; Cremaschi, F.; Wiegand, A.

    2014-06-01

    In order to improve the design process of launcher configurations in the early development phase, the software Multidisciplinary Optimization (MDO) was developed. The tool combines different efficient software tools such as Optimal Design Investigations (ODIN) for structural optimization and Aerospace Trajectory Optimization Software (ASTOS) for trajectory and vehicle design optimization for a defined payload and mission. The present paper focuses on the integration and validation of ODIN. ODIN enables the user to optimize typical axisymmetric structures by sizing the stiffening designs with respect to strength and stability while minimizing the structural mass. In addition, a fully automatic finite element model (FEM) generator module creates ready-to-run FEM models of a complete stage or launcher assembly. Cost tracking and future improvements concerning cost optimization are also indicated.

  11. Performance enhancement of fin attached ice-on-coil type thermal storage tank for different fin orientations using constrained and unconstrained simulations

    NASA Astrophysics Data System (ADS)

    Kim, M. H.; Duong, X. Q.; Chung, J. D.

    2017-03-01

    One of the drawbacks of latent thermal energy storage systems is the slow charging and discharging time due to the low thermal conductivity of the phase change material (PCM). This study numerically investigated the PCM melting process inside a finned tube to determine enhanced heat transfer performance. The influences of fin length and fin number were investigated. Also, two different fin orientations, a vertical and a horizontal type, were examined using two different simulation methods, constrained and unconstrained. The unconstrained simulation, which considers the density difference between the solid and liquid PCM, showed an approximately 40 % faster melting rate than the constrained simulation. For a precise estimation of discharging performance, unconstrained simulation is therefore essential. Thermal instability was found in the liquid layer below the solid PCM, contrary to linear stability theory, due to the strong convection driven by heat flux from the coil wall. As the fin length increases, the area affected by the fin becomes larger, and thus the discharging time becomes shorter. The discharging performance also increased as the fin number increased, but the enhancement from more than two fins was not discernible. The horizontal type shortened the complete melting time by approximately 10 % compared to the vertical type.

  12. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure has been developed which couples formal multiobjective-based techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes). The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective-function problem into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
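
    The K-S envelope used in this transformation can be written in a few lines. This is the standard textbook form of the Kreisselmeier-Steinhauser function; the sample objective values and the draw-down factor rho are illustrative:

```python
import numpy as np

# Kreisselmeier-Steinhauser aggregation: several objective/constraint
# values f_i are folded into one smooth envelope
#   KS = (1/rho) * ln( sum_i exp(rho * f_i) ),
# which approaches max_i f_i from above as rho grows.
def ks(values, rho=50.0):
    values = np.asarray(values, dtype=float)
    m = values.max()                   # shift for numerical stability
    return m + np.log(np.exp(rho * (values - m)).sum()) / rho

objs = [0.2, 1.0, 0.95]
print(round(ks(objs), 4))  # slightly above max(objs) = 1.0
```

Because the envelope is smooth, the aggregated single objective can be handed directly to a gradient-based optimizer such as BFGS, which is exactly the role it plays in the procedure above.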

  13. An effective parameter optimization with radiation balance constraints in the CAM5

    NASA Astrophysics Data System (ADS)

    Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.

    2017-12-01

    Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods mostly use unconstrained optimization, so the simulation results with the optimal parameters may not satisfy conditions that the model has to maintain. In this study, the radiation balance constraint is taken as an example and incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this optimization problem with constraints. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take the synthesized metrics using global means of radiation, precipitation, relative humidity, and temperature as the goal of optimization, and simultaneously treat the conditions that FLUT and FSNTOA should satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are set to be approximately equal to 240 Wm-2 in CAM5. Experiment results show that the synthesized metrics is 13.6% better than the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions. The FLUT condition is well satisfied, which is obviously better than the average annual FLUT obtained with the default parameters. The FSNTOA has a slight deviation from the observed value, but the relative error is less than 7.7‰.
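
    The Lagrangian multiplier idea can be sketched on a toy stand-in: a quadratic "skill metric" J(p) minimized subject to a linear "balance" constraint c(p) = 0, using the standard augmented-Lagrangian multiplier update. The functions and all numbers below are invented for illustration, not the CAM5 setup:

```python
import numpy as np

# Toy constrained tuning: minimize J(p) = ||p - (1, 2)||^2
# subject to c(p) = p0 + p1 - 2 = 0. Analytic optimum: p* = (0.5, 1.5).
J_grad = lambda p: 2.0 * (p - np.array([1.0, 2.0]))
c = lambda p: p[0] + p[1] - 2.0
c_grad = np.array([1.0, 1.0])

p = np.zeros(2)
lam, mu = 0.0, 10.0
for _ in range(50):                 # outer multiplier updates
    for _ in range(200):            # inner unconstrained minimization
        g = J_grad(p) + (lam + mu * c(p)) * c_grad
        p = p - 0.02 * g
    lam += mu * c(p)                # classic multiplier update
print(np.round(p, 3))  # close to [0.5, 1.5], with c(p) ~ 0
```

Each outer step tightens constraint satisfaction without requiring the penalty weight mu to grow without bound, which keeps the inner unconstrained problems well conditioned.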

  14. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L.; Wolinsky, Jean-Paul; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2015-03-01

    An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed involving a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE of 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical product) in a manner consistent with natural surgical workflow.

  15. Genetically engineered peptides for inorganics: study of an unconstrained bacterial display technology and bulk aluminum alloy.

    PubMed

    Adams, Bryn L; Finch, Amethist S; Hurley, Margaret M; Sarkes, Deborah A; Stratis-Cullum, Dimitra N

    2013-09-06

    The first-ever peptide biomaterial discovery using an unconstrained engineered bacterial display technology is reported. Using this approach, we have developed genetically engineered peptide binders for a bulk aluminum alloy and use molecular dynamics simulation of peptide conformational fluctuations to demonstrate sequence-dependent, structure-function relationships for metal and metal oxide interactions. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Content-based unconstrained color logo and trademark retrieval with color edge gradient co-occurrence histograms

    NASA Astrophysics Data System (ADS)

    Phan, Raymond; Androutsos, Dimitrios

    2008-01-01

    In this paper, we present a logo and trademark retrieval system for unconstrained color image databases that extends the Color Edge Co-occurrence Histogram (CECH) object detection scheme. We introduce more accurate information to the CECH, by virtue of incorporating color edge detection using vector order statistics. This produces a more accurate representation of edges in color images, in comparison to the simple color pixel difference classification of edges as seen in the CECH. Our proposed method is thus reliant on edge gradient information, and as such, we call this the Color Edge Gradient Co-occurrence Histogram (CEGCH). We use this as the main mechanism for our unconstrained color logo and trademark retrieval scheme. Results illustrate that the proposed retrieval system retrieves logos and trademarks with good accuracy, and outperforms the CECH object detection scheme with higher precision and recall.

  17. Burst Testing of Triaxial Braided Composite Tubes

    NASA Technical Reports Server (NTRS)

    Salem, J. A.; Bail, J. L.; Wilmoth, N. G.; Ghosn, L. J.; Kohlman, L. W.; Roberts, G. D.; Martin, R. E.

    2014-01-01

    Applications using triaxial braided composites are limited by the material's transverse strength, which is determined by the delamination capacity of unconstrained, free-edge tows. However, structural applications such as cylindrical tubes can be designed to minimize free-edge effects, and thus the strength with and without edge stresses is relevant to the design process. The transverse strength of triaxial braided composites without edge effects was determined by internally pressurizing tubes. In the absence of edge effects, the axial and transverse strengths were comparable. In addition, notched specimens, which minimize the effect of unconstrained tow ends, were tested in a variety of geometries. Although the commonly tested notch geometries exhibited similar axial and transverse net-section failure strength, significant dependence on notch configuration was observed. In the absence of unconstrained tows, failure ensues as a result of bias tow rotation, splitting, and fracture at cross-over regions.

  18. Automatic layout of structured hierarchical reports.

    PubMed

    Bakke, Eirik; Karger, David R; Miller, Robert C

    2013-12-01

    Domain-specific database applications tend to contain a sizable number of table-, form-, and report-style views that must each be designed and maintained by a software developer. A significant part of this job is the necessary tweaking of low-level presentation details such as label placements, text field dimensions, list or table styles, and so on. In this paper, we present a horizontally constrained layout management algorithm that automates the display of structured hierarchical data using the traditional visual idioms of hand-designed database UIs: tables, multi-column forms, and outline-style indented lists. We compare our system with pure outline and nested table layouts with respect to space efficiency and readability, the latter with an online user study on 27 subjects. Our layouts are 3.9 and 1.6 times more compact on average than outline layouts and horizontally unconstrained table layouts, respectively, and are as readable as table layouts even for large datasets.

  19. Implementation of Chaotic Gaussian Particle Swarm Optimization for Optimize Learning-to-Rank Software Defect Prediction Model Construction

    NASA Astrophysics Data System (ADS)

    Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.

    2018-03-01

    Finding the existence of software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction is required not only to state the existence of defects, but also to give a prioritized list of which modules require more intensive testing. Therefore, the allocation of test resources can be managed efficiently. Learning to rank is one approach that can provide defect module ranking data for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank software defect prediction approach. We have used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using Chaotic Gaussian Particle Swarm Optimization get better accuracy on 5 data sets, tie on 5 data sets and get worse on 1 data set. Thus, we conclude that the application of Chaotic Gaussian Particle Swarm Optimization in the learning-to-rank approach can improve the accuracy of the defect module ranking in data sets that have high-dimensional features.

  20. Contingency Contractor Optimization Phase 3 Sustainment Third-Party Software List - Contingency Contractor Optimization Tool - Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durfee, Justin David; Frazier, Christopher Rawls; Bandlow, Alisa

    2016-05-01

    The Contingency Contractor Optimization Tool - Prototype (CCOT-P) requires several third-party software packages. These are documented below for each of the CCOT-P elements: client, web server, database server, solver, web application and polling application.

  1. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT.
We assess REPROPT's performance on the standard constrained optimization test problems for EAs with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EAs. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for the other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for constrained optimization with EAs. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (optimization of the front-end section for a future neutrino factory), and a problem of normal form defect function optimization which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential. The developed optimization scenarios and tools can be used to approach similar problems.

  2. Boosting quantum annealer performance via sample persistence

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili

    2017-07-01

    We propose a novel method for reducing the number of variables in quadratic unconstrained binary optimization problems, using a quantum annealer (or any sampler) to fix the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are usually much easier for the quantum annealer to solve, due to their being smaller and consisting of disconnected components. This approach significantly increases the success rate and number of observations of the best known energy value in samples obtained from the quantum annealer, when compared with calling the quantum annealer without using it, even when using fewer annealing cycles. Use of the method results in a considerable improvement in success metrics even for problems with high-precision couplers and biases, which are more challenging for the quantum annealer to solve. The results are further enhanced by applying the method iteratively and combining it with classical pre-processing. We present results for both Chimera graph-structured problems and embedded problems from a real-world application.
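The persistence idea is simple to sketch: fix any variable that takes the same value in (nearly) all low-energy samples, then substitute those values into the QUBO to obtain a smaller residual problem. The snippet below is a minimal illustration with invented data, not the authors' implementation:

```python
def fix_persistent_variables(samples, threshold=0.9):
    # Fix each binary variable whose value persists across at least a
    # `threshold` fraction of the (low-energy) samples.
    n, m = len(samples[0]), len(samples)
    fixed = {}
    for i in range(n):
        ones = sum(s[i] for s in samples)
        if ones >= threshold * m:
            fixed[i] = 1
        elif ones <= (1 - threshold) * m:
            fixed[i] = 0
    return fixed

def reduce_qubo(Q, fixed):
    # Substitute fixed values into a QUBO dict {(i, j): weight}; returns the
    # reduced QUBO over the free variables plus a constant energy offset.
    reduced, offset = {}, 0.0
    for (i, j), w in Q.items():
        fi, fj = fixed.get(i), fixed.get(j)
        if fi is not None and fj is not None:
            offset += w * fi * fj
        elif fi is not None:
            if fi:                                    # w * x_j survives
                reduced[(j, j)] = reduced.get((j, j), 0.0) + w
        elif fj is not None:
            if fj:                                    # w * x_i survives
                reduced[(i, i)] = reduced.get((i, i), 0.0) + w
        else:
            reduced[(i, j)] = reduced.get((i, j), 0.0) + w
    return reduced, offset

samples = [[1, 0, 1], [1, 0, 0], [1, 1, 1], [1, 0, 1]]
fixed = fix_persistent_variables(samples, threshold=0.9)   # variable 0 is 1 everywhere
Q = {(0, 0): -1.0, (0, 1): 2.0, (1, 2): -1.0, (2, 2): 0.5}
reduced, offset = reduce_qubo(Q, fixed)
```

After fixing, only the free variables remain, and the reduced problem often splits into disconnected components that the sampler handles far more easily.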

  3. The unconstrained evolution of fast and efficient antibiotic-resistant bacterial genomes.

    PubMed

    Reding-Roman, Carlos; Hewlett, Mark; Duxbury, Sarah; Gori, Fabio; Gudelj, Ivana; Beardmore, Robert

    2017-01-30

    Evolutionary trajectories are constrained by trade-offs when mutations that benefit one life history trait incur fitness costs in other traits. As resistance to tetracycline antibiotics by increased efflux can be associated with an increase in length of the Escherichia coli chromosome of 10% or more, we sought costs of resistance associated with doxycycline. However, it was difficult to identify any because the growth rate (r), carrying capacity (K) and drug efflux rate of E. coli increased during evolutionary experiments where the species was exposed to doxycycline. Moreover, these improvements remained following drug withdrawal. We sought mechanisms for this seemingly unconstrained adaptation, particularly as these traits ought to trade-off according to rK selection theory. Using prokaryote and eukaryote microorganisms, including clinical pathogens, we show that r and K can trade-off, but need not, because of 'rK trade-ups'. r and K trade-off only in sufficiently carbon-rich environments where growth is inefficient. We then used E. coli ribosomal RNA (rRNA) knockouts to determine specific mutations, namely changes in rRNA operon (rrn) copy number, that can simultaneously maximize r and K. The optimal genome has fewer operons, and therefore fewer functional ribosomes, than the ancestral strain. It is, therefore, unsurprising for r-adaptation in the presence of a ribosome-inhibiting antibiotic, doxycycline, to also increase population size. We found two costs for this improvement: an elongated lag phase and the loss of stress protection genes.

  4. Sequencing of real-world samples using a microfabricated hybrid device having unconstrained straight separation channels.

    PubMed

    Liu, Shaorong; Elkin, Christopher; Kapur, Hitesh

    2003-11-01

    We describe a microfabricated hybrid device that consists of a microfabricated chip containing multiple twin-T injectors attached to an array of capillaries that serve as the separation channels. A new fabrication process was employed to create two differently sized round channels in a chip. Twin-T injectors were formed by the smaller round channels, which match the bore of the separation capillaries, and separation capillaries were incorporated into the injectors through the larger round channels, which match the outer diameter of the capillaries. This allows for a minimum dead volume and provides a robust chip/capillary interface. This hybrid design takes full advantage of the unique chip injection scheme for DNA sequencing, including sample stacking and purification and a uniform signal-intensity profile, while employing long straight capillaries for the separations. In essence, the separation channel length is optimized for both speed and resolution since it is unconstrained by chip size. To demonstrate the reliability and practicality of this hybrid device, we sequenced over 1000 real-world samples from Human Chromosome 5 and Ciona intestinalis, prepared at the Joint Genome Institute. We achieved an average Phred20 read of 675 bases in about 70 min with a success rate of 91%. For similar samples on the MegaBACE 1000, the average Phred20 read is about 550-600 bases in 120 min of separation time, with a success rate of about 80-90%.

  5. Combining analysis with optimization at Langley Research Center. An evolutionary process

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.

    1982-01-01

    The evolutionary process of combining analysis and optimization codes was traced with a view toward providing insight into the long-term goal of developing the methodology for an integrated, multidisciplinary software system for the concurrent analysis and optimization of aerospace structures. The process was traced along the lines of strength sizing, concurrent strength and flutter sizing, and general optimization to define a near-term goal for combining analysis and optimization codes. Development of a modular software system combining general-purpose, state-of-the-art, production-level analysis computer programs for structures, aerodynamics, and aeroelasticity with a state-of-the-art optimization program is required. Incorporation of a modular and flexible structural optimization software system into a state-of-the-art finite element analysis computer program will facilitate this effort. The resulting software system is controlled with a special-purpose language, communicates with a data management system, and is easily modified to add new programs and capabilities. A 337-degree-of-freedom finite element model is used to verify the accuracy of the system.

  6. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final state of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.

  7. DNASynth: a software application to optimization of artificial gene synthesis

    NASA Astrophysics Data System (ADS)

    Muczyński, Jan; Nowak, Robert M.

    2017-08-01

    DNASynth is a client-server software application in which the client runs in a web browser. The aim of this program is to support and optimize the process of artificial gene synthesis using the Ligase Chain Reaction (LCR). Thanks to LCR it is possible to obtain a DNA strand coding a user-defined peptide. The DNA sequence is calculated by an optimization algorithm that considers optimal codon usage, minimal energy of secondary structures, and a minimal number of required LCRs. Additionally, the absence of sequences characteristic of a user-defined set of restriction enzymes is guaranteed. The presented software was tested on synthetic and real data.
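The codon-usage part of such an optimization can be illustrated with a greedy back-translation sketch. The usage table below is invented for illustration (a real table, e.g. for E. coli, would be substituted), and DNASynth itself optimizes codon usage, secondary-structure energy, and LCR count jointly rather than greedily:

```python
# Hypothetical codon-usage fractions for a toy host organism.
USAGE = {
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "F": {"TTT": 0.58, "TTC": 0.42},
}

def back_translate(peptide, forbidden=()):
    # Greedy back-translation: choose the most-used codon for each residue,
    # falling back to the next-best codon whenever a forbidden
    # restriction-enzyme site would appear in the growing DNA strand.
    dna = ""
    for aa in peptide:
        for codon, _ in sorted(USAGE[aa].items(), key=lambda kv: -kv[1]):
            candidate = dna + codon
            if not any(site in candidate for site in forbidden):
                dna = candidate
                break
        else:
            raise ValueError("no codon avoids the forbidden sites")
    return dna

print(back_translate("MKF"))                        # ATGAAATTT
print(back_translate("MKF", forbidden=("GAAA",)))   # ATGAAGTTT (AAA would create GAAA)
```

The fallback in the second call shows how a restriction-site constraint can force a sub-optimal codon, which is exactly the kind of trade-off a global optimizer resolves better than greedy choice.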

  8. Indirect synthesis of multidegree-of-freedom transient systems

    NASA Technical Reports Server (NTRS)

    Chen, Y. H.; Pilkey, W. D.; Kalinowski, A. J.

    1976-01-01

    The indirect synthesis method is developed and shown to be capable of leading to a near-optimal design of multidegree-of-freedom and multidesign-element transient nonlinear dynamical systems. The basis of the approach is to select the open design parameters such that the response of the portion of the system being designed approximates the limiting performance solution. The limiting performance problem can be formulated as one of linear programming by replacing all portions of the system subject to transient disturbances by control forces and supposing that the remaining portions are linear, as are the overall kinematic constraints. One then selects the design parameters that respond most closely to the limiting performance solution, which can be achieved by unconstrained curve-fitting techniques.

  9. Text-line extraction in handwritten Chinese documents based on an energy minimization framework.

    PubMed

    Koo, Hyung Il; Cho, Nam Ik

    2012-03-01

    Text-line extraction in unconstrained handwritten documents remains a challenging problem due to nonuniform character scale, spatially varying text orientation, and the interference between text lines. In order to address these problems, we propose a new cost function that considers the interactions between text lines and the curvilinearity of each text line. Precisely, we achieve this goal by introducing normalized measures for them, which are based on an estimated line spacing. We also present an optimization method that exploits the properties of our cost function. Experimental results on a database consisting of 853 handwritten Chinese document images have shown that our method achieves a detection rate of 99.52% and an error rate of 0.32%, which outperforms conventional methods.

  10. Adiabatic Quantum Computing via the Rydberg Blockade

    NASA Astrophysics Data System (ADS)

    Keating, Tyler; Goyal, Krittika; Deutsch, Ivan

    2012-06-01

    We study an architecture for implementing adiabatic quantum computation with trapped neutral atoms. Ground state atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism, thereby providing the requisite entangling interactions. As a benchmark we study the performance of a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. We model a realistic architecture, including the effects of magnetic level structure, with qubits encoded into the clock states of ^133Cs, effective B-fields implemented through microwaves and light shifts, and atom-atom coupling achieved by excitation to a high-lying Rydberg level. Including the fundamental effects of photon scattering we find a high fidelity for the two-qubit implementation.
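The statement that the QUBO solution "is found in the ground state spin configuration" can be made concrete with a brute-force sketch. Exhaustive search is only viable for a handful of variables (the adiabatic hardware is the point for larger problems), and the coefficients below are invented:

```python
from itertools import product

def qubo_energy(Q, x):
    # E(x) = sum over (i, j) of Q[i, j] * x_i * x_j for a binary vector x;
    # diagonal entries play the role of linear terms, off-diagonal of couplers.
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

def ground_state(Q, n):
    # Exhaustive search over all 2^n bit strings.
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Toy 3-variable QUBO instance.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0, (0, 1): 2.0, (1, 2): -1.0}
gs = ground_state(Q, 3)
print(gs, qubo_energy(Q, gs))   # (0, 1, 0) -1.0  (degenerate with (1, 0, 0))
```

The mapping of such an energy function onto an Ising-like Hamiltonian, with the ground state encoding the answer, is what the dressed-Rydberg architecture described above implements physically.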

  11. Differential geometric treewidth estimation in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Jonckheere, Edmond; Brun, Todd

    2016-10-01

    The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture. The latter problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth based on the differential geometric concept of Ollivier-Ricci curvature. The latter runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.

  13. A Data-Driven Solution for Performance Improvement

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Marketed as the "Software of the Future," Optimal Engineering Systems P.I. EXPERT(TM) technology offers statistical process control and optimization techniques that are critical to businesses looking to restructure or accelerate operations in order to gain a competitive edge. Kennedy Space Center granted Optimal Engineering Systems the funding and aid necessary to develop a prototype of the process monitoring and improvement software. Completion of this prototype demonstrated that it was possible to integrate traditional statistical quality assurance tools with robust optimization techniques in a user-friendly format that is visually compelling. Using an expert system knowledge base, the software allows the user to determine objectives, capture constraints and out-of-control processes, predict results, and compute optimal process settings.

  14. Overview and Software Architecture of the Copernicus Trajectory Design and Optimization System

    NASA Technical Reports Server (NTRS)

    Williams, Jacob; Senent, Juan S.; Ocampo, Cesar; Mathur, Ravi; Davis, Elizabeth C.

    2010-01-01

    The Copernicus Trajectory Design and Optimization System represents an innovative and comprehensive approach to on-orbit mission design, trajectory analysis and optimization. Copernicus integrates state-of-the-art algorithms in optimization, interactive visualization, spacecraft state propagation, and data input-output interfaces, allowing the analyst to design spacecraft missions to all possible Solar System destinations. All of these features are incorporated within a single architecture that can be used interactively via a comprehensive GUI or passively via external interfaces that execute batch processes. This paper describes the Copernicus software architecture together with the challenges associated with its implementation. Additionally, future development and planned new capabilities are discussed. Key words: Copernicus, Spacecraft Trajectory Optimization Software.

  15. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or more gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) and Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for gradient correction-only methods. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and show that they improve upon existing techniques by several orders of magnitude.
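A minimal sketch of the prediction-correction idea, for a scalar time-varying quadratic whose optimizer is x*(t) = cos t (an illustration of the GTT/AGT flavor, not the paper's algorithms): since the Hessian is 1, the prediction step reduces to compensating the drift of the gradient, and the correction step is a gradient step at the new time.

```python
import math

def track(h=0.1, steps=100, alpha=0.3, predict=True):
    # Track x*(t) = cos(t), the minimizer of f(x, t) = 0.5 * (x - cos t)^2.
    # With unit Hessian, the exact prediction step is x += cos(t+h) - cos(t).
    x, t = 0.0, 0.0
    for _ in range(steps):
        if predict:
            x += math.cos(t + h) - math.cos(t)   # prediction: drift compensation
        t += h
        x -= alpha * (x - math.cos(t))           # correction: one gradient step
    return abs(x - math.cos(t))                  # final tracking error

err_pc = track(predict=True)     # prediction + correction: error decays geometrically
err_c = track(predict=False)     # correction only: error saturates at O(h)
```

In this toy setting the prediction is exact, so the combined scheme contracts the error at every step, while the correction-only variant is left chasing the O(h) drift of the optimizer, mirroring the O(h^2)-vs-O(h) contrast discussed above.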

  16. An efficient and practical approach to obtain a better optimum solution for structural optimization

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Yu; Huang, Jyun-Hao

    2013-08-01

    For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research to obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods. It is found in this research that data mining can accomplish this goal easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions. For constrained optimization problems, it is used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP sometimes are even better than the solution found using a hybrid global search method with approximate structural analyses.
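A stripped-down sketch of the idea: random sampling plus an elite bounding box stands in for the data-mining classification/clustering step, and a crude coordinate descent stands in for the gradient-based method (SQP in the article). The test function and all parameters are invented:

```python
import random

def reduced_space_search(f, bounds, n_samples=200, keep=0.1, seed=0):
    # 1) Sample the full design space and keep the best fraction of points
    #    (a stand-in for the data-mining step).
    rng = random.Random(seed)
    pts = [[rng.uniform(a, b) for a, b in bounds] for _ in range(n_samples)]
    pts.sort(key=f)
    elite = pts[: max(1, int(keep * n_samples))]
    # 2) Shrink the bounds to the bounding box of the elite points.
    box = [(min(p[i] for p in elite), max(p[i] for p in elite))
           for i in range(len(bounds))]
    # 3) Run a crude coordinate descent inside the reduced box
    #    (a gradient-based code such as SQP would be used here instead).
    x, step = list(elite[0]), 0.1
    for _ in range(300):
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] = min(box[i][1], max(box[i][0], y[i] + d))
                if f(y) < f(x):
                    x = y
        step *= 0.95
    return x, box

f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2   # optimum at (1, -2)
x, box = reduced_space_search(f, [(-5.0, 5.0), (-5.0, 5.0)])
```

The local search starts inside a box that already surrounds the promising region, which is precisely what makes the subsequent gradient-based run cheaper and less likely to stall in a poor local optimum.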

  17. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  18. 76 FR 5832 - International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-02

    ... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,554] International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA, San Jose, CA; Notice of... determination of the TAA petition filed on behalf of workers at International Business Machines (IBM), Software...

  19. General purpose optimization software for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1990-01-01

    The author has developed several general purpose optimization programs over the past twenty years. The earlier programs were developed as research codes and served that purpose reasonably well. However, in taking the formal step from research to industrial application programs, several important lessons have been learned. Among these are the importance of clear documentation, immediate user support, and consistent maintenance. Most important has been the issue of providing software that gives a good, or at least acceptable, design at minimum computational cost. Here, the basic issues in developing optimization software for industrial applications are outlined, and issues of convergence rate, reliability, and relative minima are discussed. Considerable feedback has been received from users, and new software is being developed to respond to identified needs. The basic capabilities of this software are outlined. A major motivation for the development of commercial-grade software is ease of use and flexibility, and these issues are discussed with reference to general multidisciplinary applications. It is concluded that design productivity can be significantly enhanced by the more widespread use of optimization as an everyday design tool.

  20. Sulcal set optimization for cortical surface registration.

    PubMed

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimation of an optimal subset of size N(C) from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N(C) curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N(C) sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N(C) constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.

  1. Software for Optimizing Quality Assurance of Other Software

    NASA Technical Reports Server (NTRS)

    Feather, Martin; Cornford, Steven; Menzies, Tim

    2004-01-01

    Software assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures. Examples of such activities are the following: code inspections, unit tests, design reviews, performance analyses, construction of traceability matrices, etc. In practice, software development projects have only limited resources (e.g., schedule, budget, and availability of personnel) to cover the entire development effort, of which assurance is but a part. Projects must therefore select judiciously from among the possible assurance activities. At its heart, this can be viewed as an optimization problem; namely, to determine the allocation of limited resources (time, money, and personnel) to minimize risk or, alternatively, to minimize the resources needed to reduce risk to an acceptable level. The end result of the work reported here is a means to optimize quality-assurance processes used in developing software.
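In its simplest form, the allocation problem described above resembles a knapsack: each assurance activity has a cost and an estimated risk reduction, and the budget is limited. The greedy sketch below uses invented activity data; the actual tool applies a more sophisticated optimizer than a cost-effectiveness sort:

```python
def plan_assurance(activities, budget):
    # Greedy allocation: pick activities by risk reduction per unit cost
    # until the budget runs out. Each activity is (name, cost, reduction).
    chosen, remaining = [], budget
    for name, cost, reduction in sorted(
            activities, key=lambda a: a[2] / a[1], reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen, budget - remaining

# Invented costs (person-weeks) and risk reductions for illustration.
activities = [
    ("code inspection", 4.0, 0.30),
    ("unit tests",      3.0, 0.25),
    ("design review",   2.0, 0.10),
    ("perf analysis",   5.0, 0.15),
]
chosen, spent = plan_assurance(activities, budget=8.0)
print(chosen, spent)   # ['unit tests', 'code inspection'] 7.0
```

Greedy selection is only a heuristic for this problem class, which is one reason a dedicated optimization tool is worthwhile when activities interact or risks overlap.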

  2. Efficiency of unconstrained minimization techniques in nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.; Knight, N. F., Jr.

    1978-01-01

    Unconstrained minimization algorithms have been critically evaluated for their effectiveness in solving structural problems involving geometric and material nonlinearities. The algorithms have been categorized as being zeroth, first, or second order depending upon the highest derivative of the function required by the algorithm. The sensitivity of these algorithms to the accuracy of derivatives clearly suggests using analytically derived gradients instead of finite difference approximations. The use of analytic gradients results in better control of the number of minimizations required for convergence to the exact solution.
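The sensitivity to gradient accuracy is easy to demonstrate: the same descent loop can be fed either analytic gradients or finite-difference approximations. A self-contained sketch on an invented quadratic (central differences are used here; one-sided differences would be noticeably less accurate):

```python
def descend(f, grad, x, lr=0.1, iters=100):
    # Plain gradient descent; the gradient routine is pluggable.
    for _ in range(iters):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

f = lambda x: (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2   # minimum at (3, -1)
analytic = lambda x: [2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)]

def finite_diff(f, h=1e-5):
    # Central differences: O(h^2) accurate, but 2n function calls per gradient.
    def g(x):
        grads = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            grads.append((f(xp) - f(xm)) / (2.0 * h))
        return grads
    return g

x_analytic = descend(f, analytic, [0.0, 0.0])
x_numeric = descend(f, finite_diff(f), [0.0, 0.0])
```

On this well-conditioned quadratic both variants converge, but the finite-difference version pays extra function evaluations and carries truncation/round-off error, which is exactly the sensitivity the study above observed on harder nonlinear problems.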

  3. Control pole placement relationships

    NASA Technical Reports Server (NTRS)

    Ainsworth, O. R.

    1982-01-01

    Using a simplified Large Space Structure (LSS) model, a technique was developed that gives algebraic relationships for the unconstrained poles. The relationships obtained by this technique are functions of the structural characteristics and the control gains. Particularly interesting relationships evolve for the case when the structural damping is zero: if the damping is zero, the constrained poles are uncoupled from the structural mode shapes. These relationships, derived both with and without structural damping, provide new insight into the migration of the unconstrained poles for the CFPPS.

  4. Particle Swarm Optimization Toolbox

    NASA Technical Reports Server (NTRS)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO; a GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or the optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents, and the algorithm relies on this combination of traits to produce solutions better than either of the original parents. As the algorithm progresses, individuals that hold optimal traits emerge as the optimal solutions. Due to the generic design of all the optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers, whose only purpose is to evaluate the solutions the optimizers provide. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since its specific details are of no concern to the optimizer. These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.
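The velocity update at the heart of a single-objective particle swarm optimizer fits in a few lines. The sketch below is a generic textbook SOPSO in Python (the toolbox itself is MATLAB code), with each particle pulled toward its personal best and the swarm's global best; all parameter values are conventional defaults, not the toolbox's:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, seed=2,
        w=0.7, c1=1.5, c2=1.5):
    # Minimal single-objective PSO: inertia w, cognitive pull c1 toward
    # each particle's personal best, social pull c2 toward the global best.
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    gbest = list(min(pbest, key=f))
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            if f(x[i]) < f(pbest[i]):
                pbest[i] = list(x[i])
                if f(x[i]) < f(gbest):
                    gbest = list(x[i])
    return gbest

best = pso(lambda p: sum(u * u for u in p), dim=2, bounds=(-10.0, 10.0))
```

A multi-objective variant (MOPSO) replaces the single global best with an archive of non-dominated solutions from which leaders are drawn, producing a Pareto front instead of one point.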

  5. Transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Hiday-Johnston, L. A.; Howell, K. C.

    1994-04-01

    A strategy is formulated to design optimal time-fixed impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The adjoint equation in terms of rotating coordinates in the elliptic restricted three-body problem is shown to be of a distinctly different form from that obtained in the analysis of trajectories in the two-body problem. Also, the necessary conditions for a time-fixed two-impulse transfer to be optimal are stated in terms of the primer vector. Primer vector theory is then extended to nonoptimal impulsive trajectories in order to establish a criterion whereby the addition of an interior impulse reduces total fuel expenditure. The necessary conditions for the local optimality of a transfer containing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. Determination of location, orientation, and magnitude of each additional impulse is accomplished by the unconstrained minimization of the cost function using a multivariable search method. Results indicate that substantial savings in fuel can be achieved by the addition of interior impulsive maneuvers on transfers between libration-point orbits.

  6. Optimal apparent damping as a function of the bandwidth of an array of vibration absorbers.

    PubMed

    Vignola, Joseph; Glean, Aldo; Judge, John; Ryan, Teresa

    2013-08-01

    The transient response of a resonant structure can be altered by the attachment of one or more substantially smaller resonators. Considered here is a coupled array of damped harmonic oscillators whose resonant frequencies are distributed across a frequency band that encompasses the natural frequency of the primary structure. Vibration energy introduced to the primary structure, which has little to no intrinsic damping, is transferred into and trapped by the attached array. It is shown that, when the properties of the array are optimized to reduce the settling time of the primary structure's transient response, the apparent damping is approximately proportional to the bandwidth of the array (the span of resonant frequencies of the attached oscillators). Numerical simulations were conducted using an unconstrained nonlinear minimization algorithm to find system parameters that result in the fastest settling time. This minimization was conducted for a range of system characteristics including the overall bandwidth of the array, the ratio of the total array mass to that of the primary structure, and the distributions of mass, stiffness, and damping among the array elements. This paper reports optimal values of these parameters and demonstrates that the resulting minimum settling time decreases with increasing bandwidth.

  7. The mathematical statement for the solving of the problem of N-version software system design

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    N-version programming, a methodology for designing fault-tolerant software systems, allows such tasks to be solved successfully. The N-version programming approach turns out to be effective because the system is constructed out of several versions of a software module executed in parallel. Those versions are written to meet the same specification, but by different programmers. The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems with a large dimensionality.
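A probability-vector search of this general flavor can be sketched as follows. This is a PBIL-style illustration in the spirit of probability-driven pseudo-Boolean search; the actual MVP update rules differ, and the objective here is an invented toy (Hamming distance to a target mask rather than a real version-selection cost):

```python
import random

def prob_search(f, n, iters=150, pop=30, lr=0.2, seed=3):
    # Maintain one probability per bit, sample candidate bit vectors,
    # and shift the probabilities toward the best sample of each generation.
    rng = random.Random(seed)
    p = [0.5] * n
    best, best_val = None, float("inf")
    for _ in range(iters):
        samples = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                   for _ in range(pop)]
        leader = min(samples, key=f)
        if f(leader) < best_val:
            best, best_val = leader, f(leader)
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, leader)]
    return best

target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda b: sum(x != t for x, t in zip(b, target))   # toy pseudo-Boolean objective
print(prob_search(cost, n=8))
```

In an N-version setting, each bit would instead encode whether a particular version is included in the system, and the objective would combine reliability and cost estimates.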

  8. Arsenic removal from contaminated groundwater by membrane-integrated hybrid plant: optimization and control using Visual Basic platform.

    PubMed

    Chakrabortty, S; Sen, M; Pal, P

    2014-03-01

    A simulation software package (ARRPA) has been developed on the Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant, given the absence of any such software. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treating chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs of the computer simulation program and the experimental findings is excellent and consistent under varying operating conditions, reflecting the high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R² = 0.989) and Willmott d-index (0.989) are indicators of the capability of the software in analyzing the performance of the plant. The software permits pre-analysis and manipulation of input data, helps in optimization, and exhibits the performance of the integrated plant visually on a graphical platform. Performance analysis of the whole system as well as of the individual units is possible using the tool. The software, the first of its kind in its domain and built in the well-known Microsoft Visual Basic environment, is likely to be very useful in the successful design, optimization, and operation of an advanced hybrid treatment plant for removal of arsenic from contaminated groundwater.

  9. Overcoming free energy barriers using unconstrained molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hénin, Jérôme; Chipot, Christophe

    2004-08-01

    The combination of unconstrained molecular dynamics (MD) with the formalisms of thermodynamic integration and average force [Darve and Pohorille, J. Chem. Phys. 115, 9169 (2001)] has been employed to determine potentials of mean force. When implemented in a general MD code, the additional computational effort, compared to other standard, unconstrained simulations, is marginal. The force acting along a chosen reaction coordinate ξ is estimated from the individual forces exerted on the chemical system and accumulated as the simulation progresses. The estimated free energy derivative computed for small intervals of ξ is canceled by an adaptive bias to overcome the barriers of the free energy landscape. Evolution of the system along the reaction coordinate is thus limited solely by its self-diffusion properties. The illustrative examples of the reversible unfolding of deca-L-alanine, the association of acetate and guanidinium ions in water, the dimerization of methane in water, and its transfer across the water liquid-vapor interface are examined to probe the efficiency of the method.
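
    A minimal one-dimensional sketch of the adaptive-biasing idea the abstract describes (not the authors' MD implementation): accumulate a running estimate of the mean force in bins of the reaction coordinate and apply the opposite bias, so the walker diffuses across the barrier. The double-well potential and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dUdx = lambda x: 4 * x * (x * x - 1)        # double well U(x) = (x^2 - 1)^2

nbins, lo, hi = 40, -1.5, 1.5
force_sum = np.zeros(nbins)                 # running sums of instantaneous force
count = np.zeros(nbins)
visits = np.zeros(nbins)

steps, dt, beta = 200_000, 1e-3, 3.0
noise = np.sqrt(2 * dt / beta) * rng.standard_normal(steps)
x = -1.0                                    # start in the left well
for i in range(steps):
    b = min(nbins - 1, max(0, int((x - lo) / (hi - lo) * nbins)))
    f = -dUdx(x)                            # force along the coordinate xi = x
    force_sum[b] += f
    count[b] += 1
    bias = -force_sum[b] / count[b]         # cancels the estimated mean force
    x += dt * (f + bias) + noise[i]         # overdamped Langevin step
    x = min(hi, max(lo, x))                 # reflecting walls at the box edges
    visits[b] += 1

left, right = visits[:nbins // 2].sum(), visits[nbins // 2:].sum()
print(left, right)                          # both wells get sampled
```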

  10. Overcoming free energy barriers using unconstrained molecular dynamics simulations.

    PubMed

    Hénin, Jérôme; Chipot, Christophe

    2004-08-15

    The combination of unconstrained molecular dynamics (MD) with the formalisms of thermodynamic integration and average force [Darve and Pohorille, J. Chem. Phys. 115, 9169 (2001)] has been employed to determine potentials of mean force. When implemented in a general MD code, the additional computational effort, compared to other standard, unconstrained simulations, is marginal. The force acting along a chosen reaction coordinate ξ is estimated from the individual forces exerted on the chemical system and accumulated as the simulation progresses. The estimated free energy derivative computed for small intervals of ξ is canceled by an adaptive bias to overcome the barriers of the free energy landscape. Evolution of the system along the reaction coordinate is thus limited solely by its self-diffusion properties. The illustrative examples of the reversible unfolding of deca-L-alanine, the association of acetate and guanidinium ions in water, the dimerization of methane in water, and its transfer across the water liquid-vapor interface are examined to probe the efficiency of the method. (c) 2004 American Institute of Physics.

  11. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software test case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: early-search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive-feedback mechanism easily produces stagnation and precocity. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations. PMID:24883391
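
    The specific IPVACO/IGPACO/ACIACO update rules are not given in the abstract, so the following sketches only the generic ACO-for-test-generation idea on a toy unit under test: ants sample inputs with probability proportional to pheromone, and inputs that reach not-yet-covered branches are reinforced. All names, domains, and constants are assumptions.

```python
import random

random.seed(0)

def program_under_test(a, b):
    """Toy unit under test; returns the set of branch ids exercised."""
    hit = set()
    hit.add("a>5" if a > 5 else "a<=5")
    hit.add("b==0" if b == 0 else "b!=0")
    return hit

TARGET = {"a>5", "a<=5", "b==0", "b!=0"}
domain = list(range(11))                 # candidate input values 0..10
tau_a = {v: 1.0 for v in domain}         # pheromone on candidate values for a
tau_b = {v: 1.0 for v in domain}
covered, suite = set(), []

for _ in range(200):
    # one ant samples a test case, biased by pheromone
    a = random.choices(domain, weights=[tau_a[v] for v in domain])[0]
    b = random.choices(domain, weights=[tau_b[v] for v in domain])[0]
    new = program_under_test(a, b) - covered
    for tau in (tau_a, tau_b):           # evaporation on every trail
        for v in tau:
            tau[v] *= 0.9
    if new:                              # reinforce inputs reaching new branches
        tau_a[a] += len(new)
        tau_b[b] += len(new)
        covered |= new
        suite.append((a, b))
    if covered == TARGET:
        break

print(sorted(covered), suite)
```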

  12. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When such models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. That causes problems of optimization predictability, as most numerical optimization methods are stochastic and convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary duration of optimization becomes critical when evaluating a high number of combinations of adjustable parameters or when working with large dynamic models. This task is complex due to the variety of optimization methods and software tools and the nonlinearity features of models in different parameter spaces. The software tool ConvAn was developed to analyze the statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization-method parameters, and number of adjustable model parameters. Convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum, or values close to it, as well as the computational time necessary to reach them, and to estimate optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are unsuitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
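
    The curve normalization a tool like ConvAn performs can be illustrated with a toy experiment (a sketch of the general idea, not ConvAn's code; the random-search optimizer, objective, and run counts are assumptions): record best-so-far objective curves from repeated stochastic runs, then rescale them to [0, 1] so different methods and models can be compared on one plot.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.sum((x - 1.0) ** 2)       # toy objective with optimum at x = 1

def random_search_run(iters=200, dim=4, step=0.5):
    """One stochastic optimization run; returns the best-so-far curve."""
    x = rng.normal(size=dim)
    curve = [f(x)]
    for _ in range(iters):
        cand = x + step * rng.normal(size=dim)
        if f(cand) < f(x):                 # accept only improvements
            x = cand
        curve.append(f(x))
    return np.array(curve)                 # non-increasing by construction

runs = np.vstack([random_search_run() for _ in range(10)])
f_lo, f_span = runs.min(), runs.max() - runs.min()
normalized = (runs - f_lo) / f_span        # all runs on one shared [0, 1] scale
median_curve = np.median(normalized, axis=0)
print(float(median_curve[0]), float(median_curve[-1]))
```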

  13. Contingency Contractor Optimization Phase 3 Sustainment Software Design Document - Contingency Contractor Optimization Tool - Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durfee, Justin David; Frazier, Christopher Rawls; Bandlow, Alisa

    This document describes the final software design of the Contingency Contractor Optimization Tool - Prototype. Its purpose is to provide the overall architecture of the software and the logic behind this architecture. Documentation for the individual classes is provided in the application Javadoc. The Contingency Contractor Optimization project is intended to address Department of Defense mandates by delivering a centralized strategic planning tool that allows senior decision makers to quickly and accurately assess the impacts, risks, and mitigation strategies associated with utilizing contract support. The Contingency Contractor Optimization Tool - Prototype was developed in Phase 3 of the OSD ATL Contingency Contractor Optimization project to support strategic planning for contingency contractors. The planning tool uses a model to optimize the Total Force mix by minimizing the combined total costs for selected mission scenarios. The model optimizes the match of personnel types (military, DoD civilian, and contractors) and capabilities to meet mission requirements as effectively as possible, based on risk, cost, and other requirements.

  14. Design Genetic Algorithm Optimization Education Software Based Fuzzy Controller for a Tricopter Fly Path Planning

    ERIC Educational Resources Information Center

    Tran, Huu-Khoa; Chiou, Juing-Shian; Peng, Shou-Tao

    2016-01-01

    In this paper, Genetic Algorithm Optimization (GAO) education software based on a Fuzzy Logic Controller (GAO-FLC) is designed for simulating the flight motion control of Unmanned Aerial Vehicles (UAVs). The generated flight trajectories integrate fuzzy-controller Scaling Factor (SF) gains optimized by the GAO algorithm. The…

  15. Hybrid PV/diesel solar power system design using multi-level factor analysis optimization

    NASA Astrophysics Data System (ADS)

    Drake, Joshua P.

    Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state of the art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as it applied to solar power system design. The solar power design algorithms, software work flow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations was generated. It was determined that there were several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.

  16. Design and Optimization Method of a Two-Disk Rotor System

    NASA Astrophysics Data System (ADS)

    Huang, Jingjing; Zheng, Longxi; Mei, Qing

    2016-04-01

    An integrated analytical method based on the multidisciplinary optimization software Isight and the general finite element software ANSYS is proposed in this paper. First, a two-disk rotor system was established, and its modes, harmonic response, and transient response under acceleration conditions were analyzed with ANSYS, yielding the dynamic characteristics of the two-disk rotor system. On this basis, the two-disk rotor model was integrated into the multidisciplinary design optimization software Isight. According to the design of experiments (DOE) and the dynamic characteristics, the optimization variables, optimization objectives, and constraints were confirmed. After that, multi-objective design optimization of the transient process was carried out with three different global optimization algorithms: the Evolutionary Optimization Algorithm, the Multi-Island Genetic Algorithm, and the Pointer Automatic Optimizer. The optimum position of the two-disk rotor system was obtained under the specified constraints. Meanwhile, the accuracy and the number of evaluations of the different optimization algorithms were compared. The optimization results indicated that rotor vibration reached its minimum value and that design efficiency and quality were improved by multidisciplinary design optimization while meeting the design requirements, which provides a reference for improving the design efficiency and reliability of aero-engine rotors.

  17. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization

    PubMed Central

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

    It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence—with at most a linear convergence rate—because CG formulas are generated by linear approximations of the objective functions. Quadratic-convergence results are very limited. We introduce a new PRP method that also uses a restart strategy; moreover, the method achieves n-step quadratic convergence and exploits both function value and gradient information. In this paper, we show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. Numerical experiments demonstrate that the new PRP algorithm is competitive with the standard CG method. PMID:26381742
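
    For orientation, the sketch below shows a standard PRP+ conjugate-gradient loop with an Armijo backtracking line search and periodic restarts. This is a generic baseline of the kind the paper builds on, not the authors' function-value-based formula; the test problem is an assumed small quadratic.

```python
import numpy as np

def armijo(f, x, d, g, s=1.0, rho=0.5, c=1e-4):
    """Backtrack until f(x + a d) <= f(x) + c a g.d."""
    a = s
    while f(x + a * d) > f(x) + c * a * (g @ d):
        a *= rho
        if a < 1e-12:
            return 0.0
    return a

def prp_cg(f, grad, x0, tol=1e-8, max_iter=500):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        a = armijo(f, x, d, g)
        x_new = x + a * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)   # Polak-Ribiere-Polyak
        d = -g_new + max(beta, 0.0) * d        # PRP+ nonnegativity safeguard
        if (k + 1) % len(x0) == 0 or g_new @ d >= 0:
            d = -g_new                         # restart with steepest descent
        x, g = x_new, g_new
    return x

# demo: minimize 0.5 x^T A x - b^T x on a small SPD quadratic
A = np.diag([1.0, 2.0, 3.0, 4.0])
b = np.ones(4)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = prp_cg(f, grad, np.zeros(4))
print(x_star)                                  # close to A^{-1} b
```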

  18. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization.

    PubMed

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

    It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence--with at most a linear convergence rate--because CG formulas are generated by linear approximations of the objective functions. Quadratic-convergence results are very limited. We introduce a new PRP method that also uses a restart strategy; moreover, the method achieves n-step quadratic convergence and exploits both function value and gradient information. In this paper, we show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. Numerical experiments demonstrate that the new PRP algorithm is competitive with the standard CG method.

  19. Computer aided design of monolithic microwave and millimeter wave integrated circuits and subsystems

    NASA Astrophysics Data System (ADS)

    Ku, Walter H.

    1989-05-01

    The objectives of this research are to develop analytical and computer-aided design techniques for monolithic microwave and millimeter wave integrated circuits (MMIC and MIMIC) and subsystems and to design and fabricate those ICs. Emphasis was placed on heterojunction-based devices, especially the High Electron Mobility Transistor (HEMT), for both low-noise and medium-power microwave and millimeter wave applications. Circuits considered include monolithic low-noise amplifiers, power amplifiers, and distributed and feedback amplifiers. Interactive computer-aided design programs were developed, which include large-signal models of InP MISFETs and InGaAs HEMTs. Further, a new unconstrained optimization algorithm, POSM, was developed and implemented in the general Analysis and Design program for Integrated Circuits (ADIC) to assist in the design of large-signal nonlinear circuits.

  20. Structural optimization via a design space hierarchy

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1976-01-01

    Mathematical programming techniques provide a general approach to automated structural design. An iterative method is proposed in which design is treated as a hierarchy of subproblems, one being locally constrained and the other being locally unconstrained. It is assumed that the design space is locally convex in the case of good initial designs and that the objective and constraint functions are continuous, with continuous first derivatives. A general design algorithm is outlined for finding a move direction which will decrease the value of the objective function while maintaining a feasible design. The case of one-dimensional search in a two-variable design space is discussed. Possible applications are discussed. A major feature of the proposed algorithm is its application to problems which are inherently ill-conditioned, such as design of structures for optimum geometry.

  1. Performance analysis and optimization of an advanced pharmaceutical wastewater treatment plant through a visual basic software tool (PWWT.VB).

    PubMed

    Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha

    2016-05-01

    A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data, which helps in optimization, and shows the plant performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values have been observed to agree well with the extensive experimental investigations and to be consistent under varying operating conditions such as operating pressure, operating flow rate, and draw solute concentration. A low relative error (RE = 0.09) and a high Willmott d-index (d = 0.981) reflect the high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for the system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.

  2. Reflected stochastic differential equation models for constrained animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.

  3. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
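
    The contrast between squared and absolute-value penalties can be seen on a one-variable problem, an illustrative sketch rather than the article's path-following algorithm: minimize (x - 2)² subject to x ≤ 1, whose constrained solution is x = 1. The exact penalty recovers x = 1 at a finite penalty constant (once ρ exceeds |f'(1)| = 2), while the squared penalty only approaches it as ρ → ∞.

```python
from scipy.optimize import minimize_scalar

f = lambda x: (x - 2.0) ** 2
viol = lambda x: max(0.0, x - 1.0)       # violation of the constraint x <= 1

for rho in [1.0, 4.0, 16.0, 64.0]:
    # squared penalty: minimizer (2 + rho)/(1 + rho) > 1 for every finite rho
    sq = minimize_scalar(lambda x: f(x) + rho * viol(x) ** 2).x
    # exact (absolute-value) penalty: minimizer sits at the kink x = 1
    # as soon as rho > 2
    ex = minimize_scalar(lambda x: f(x) + rho * viol(x)).x
    print(f"rho={rho:5.1f}  squared -> {sq:.4f}   exact -> {ex:.4f}")
```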

  4. Learning to detect and combine the features of an object

    PubMed Central

    Suchow, Jordan W.; Pelli, Denis G.

    2013-01-01

    To recognize an object, it is widely supposed that we first detect and then combine its features. Familiar objects are recognized effortlessly, but unfamiliar objects—like new faces or foreign-language letters—are hard to distinguish and must be learned through practice. Here, we describe a method that separates detection and combination and reveals how each improves as the observer learns. We dissociate the steps by two independent manipulations: For each step, we do or do not provide a bionic crutch that performs it optimally. Thus, the two steps may be performed solely by the human, solely by the crutches, or cooperatively, when the human takes one step and a crutch takes the other. The crutches reveal a double dissociation between detecting and combining. Relative to the two-step ideal, the human observer’s overall efficiency for unconstrained identification equals the product of the efficiencies with which the human performs the steps separately. The two-step strategy is inefficient: Constraining the ideal to take two steps roughly halves its identification efficiency. In contrast, we find that humans constrained to take two steps perform just as well as when unconstrained, which suggests that they normally take two steps. Measuring threshold contrast (the faintness of a barely identifiable letter) as it improves with practice, we find that detection is inefficient and learned slowly. Combining is learned at a rate that is 4× higher and, after 1,000 trials, 7× more efficient. This difference explains much of the diversity of rates reported in perceptual learning studies, including effects of complexity and familiarity. PMID:23267067

  5. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044

  6. A Language for Specifying Compiler Optimizations for Generic Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcock, Jeremiah J.

    2007-01-01

    Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization: they are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.

  7. Dynamic optimization and its relation to classical and quantum constrained systems

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo

    2017-08-01

    We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it in a correct way. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a nonlinear partial differential equation is obtained for the function S. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.

  8. State transformations and Hamiltonian structures for optimal control in discrete systems

    NASA Astrophysics Data System (ADS)

    Sieniutycz, S.

    2006-04-01

    Preserving the usual definition of the Hamiltonian H as the scalar product of rates and generalized momenta, we investigate two basic classes of discrete optimal control processes governed by difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of the optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of the optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is whether and when Hamilton's canonical structures emerge in optimal discrete systems. For constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion, which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ, two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at discrete canonical equations of Hamilton, maximum principles, and (in the continuous limit of processes with free time intervals) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed, with particular attention paid to models nonlinear in the time interval θ.
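
    Bellman's backward induction, the DP basis the abstract refers to, can be sketched for a small discrete-stage problem (illustrative grids, dynamics, and costs; the CB variation of the terminal state is not implemented). The DP value is checked against exhaustive enumeration of all control sequences.

```python
import itertools
import numpy as np

N = 5                                       # number of stages
states = range(-3, 4)                       # admissible x values
controls = (-1, 0, 1)                       # admissible u values
stage_cost = lambda x, u: x * x + u * u
terminal = lambda x: 10 * x * x
# dynamics: x_{k+1} = x_k + u_k

V = {x: terminal(x) for x in states}        # value function at stage N
policy = []                                 # greedy control per stage
for k in reversed(range(N)):
    V_new, pi = {}, {}
    for x in states:
        best = min(
            (stage_cost(x, u) + V[x + u], u)
            for u in controls if (x + u) in states
        )
        V_new[x], pi[x] = best
    V, policy = V_new, [pi] + policy

# brute-force check from x0 = 3: enumerate every control sequence
x0 = 3
def rollout(us):
    x, cost = x0, 0
    for u in us:
        if x + u not in states:
            return np.inf
        cost += stage_cost(x, u)
        x += u
    return cost + terminal(x)

brute = min(rollout(us) for us in itertools.product(controls, repeat=N))
print(V[x0], brute)                         # DP value equals enumeration
```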

  9. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2013-01-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
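
    The projected-gradient scheme in its simplest setting, nonnegative least squares, where the Euclidean projection onto the feasible set is a coordinate-wise clip. This is a generic sketch of the scheme, not the paper's multi-task formulation; the problem data are randomly generated assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 8))
b = rng.normal(size=30)

# minimize ||Ax - b||^2 subject to x >= 0
L = 2 * np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
x = np.zeros(8)
for _ in range(2000):
    g = 2 * A.T @ (A @ x - b)               # gradient step ...
    x = np.clip(x - g / L, 0.0, None)       # ... then Euclidean projection

g = 2 * A.T @ (A @ x - b)                   # KKT check at the returned point:
print(x.round(4))                           # grad zero where x>0, >=0 where x=0
```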

  10. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks.

    PubMed

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2012-02-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.

  11. Body stability and muscle and motor cortex activity during walking with wide stance

    PubMed Central

    Farrell, Brad J.; Bulgakova, Margarita A.; Beloozerova, Irina N.; Sirota, Mikhail G.

    2014-01-01

    Biomechanical and neural mechanisms of balance control during walking are still poorly understood. In this study, we examined the body dynamic stability, activity of limb muscles, and activity of motor cortex neurons [primarily pyramidal tract neurons (PTNs)] in the cat during unconstrained walking and walking with a wide base of support (wide-stance walking). By recording three-dimensional full-body kinematics we found for the first time that during unconstrained walking the cat is dynamically unstable in the forward direction during stride phases when only two diagonal limbs support the body. In contrast to standing, an increased lateral between-paw distance during walking dramatically decreased the cat's body dynamic stability in double-support phases and prompted the cat to spend more time in three-legged support phases. Muscles contributing to abduction-adduction actions had higher activity during stance, while flexor muscles had higher activity during swing of wide-stance walking. The overwhelming majority of neurons in layer V of the motor cortex, 82% and 83% in the forelimb and hindlimb representation areas, respectively, were active differently during wide-stance walking compared with the unconstrained condition, most often by having a different depth of stride-related frequency modulation along with a different mean discharge rate and/or preferred activity phase. Upon transition from unconstrained to wide-stance walking, proximal limb-related neuronal groups subtly but statistically significantly shifted their activity toward the swing phase, the stride phase where most of body instability occurs during this task. The data suggest that the motor cortex participates in maintenance of body dynamic stability during locomotion. PMID:24790167

  12. Comparing kinematic changes between a finger-tapping task and unconstrained finger flexion-extension task in patients with Parkinson's disease.

    PubMed

    Teo, W P; Rodrigues, J P; Mastaglia, F L; Thickbroom, G W

    2013-06-01

    Repetitive finger tapping is a well-established clinical test for the evaluation of parkinsonian bradykinesia, but few studies have investigated other finger movement modalities. We compared the kinematic changes (movement rate and amplitude) and response to levodopa during a conventional index finger-thumb-tapping task and an unconstrained index finger flexion-extension task performed at maximal voluntary rate (MVR) for 20 s in 11 individuals with levodopa-responsive Parkinson's disease (OFF and ON) and 10 healthy age-matched controls. Between-task comparisons showed that for all conditions, the initial movement rate was greater for the unconstrained flexion-extension task than the tapping task. Movement rate in the OFF state was slower than in controls for both tasks and normalized in the ON state. The movement amplitude was also reduced for both tasks in OFF and increased in the ON state but did not reach control levels. The rate and amplitude of movement declined significantly for both tasks under all conditions (OFF/ON and controls). The time course of rate decline was comparable for both tasks and was similar in OFF/ON and controls, whereas the tapping task was associated with a greater decline in movement amplitude, both in controls and ON, but not OFF. The findings indicate that both finger movement tasks show similar kinematic changes during a 20-s sustained MVR, but that movement amplitude is less well sustained during the tapping task than the unconstrained finger movement task. Both movement rate and amplitude improved with levodopa; however, movement rate was more levodopa responsive than amplitude.

  13. Development and Application of Collaborative Optimization Software for Plate-fin Heat Exchanger

    NASA Astrophysics Data System (ADS)

    Chunzhen, Qiao; Ze, Zhang; Jiangfeng, Guo; Jian, Zhang

    2017-12-01

    This paper introduces the design ideas behind a calculation software package for plate-fin heat exchangers, together with application examples. Because designing and optimizing heat exchangers involves a large amount of computation, we used Visual Basic 6.0 as the software development platform to build a basic calculation tool that reduces this computational burden. The design case is a plate-fin heat exchanger sized for boiler tail flue gas, and the software is based on the traditional design method for plate-fin heat exchangers. Using the software for the design and calculation of plate-fin heat exchangers effectively reduces the amount of computation while giving results comparable to traditional methods, and thus has high practical value.

  14. Using prior information to separate the temperature response to greenhouse gas forcing from that of aerosols - Estimating the transient climate response

    NASA Astrophysics Data System (ADS)

    Schurer, Andrew; Hegerl, Gabriele

    2016-04-01

    The evaluation of the transient climate response (TCR) is of critical importance to policy makers, as it can be used to calculate a simple estimate of the expected warming given predicted greenhouse gas emissions. Previous studies using optimal detection techniques have been able to estimate a TCR value from the historical record using simulations from some of the models which took part in the Coupled Model Intercomparison Project Phase 5 (CMIP5), but have found that others give unconstrained results. This is at least partly due to degeneracy between the greenhouse gas and aerosol signals, which makes separation of the temperature response to these forcings problematic. Here we re-visit this important topic by using an adapted optimal detection analysis within a Bayesian framework. We account for observational uncertainty by the use of an ensemble of instrumental observations, and model uncertainty by combining the results from several different models. This framework allows the use of prior information, which is found to help separate the response to the different forcings, leading to a more constrained estimate of TCR.

  15. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.
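    The rate-based idea, propagating state derivatives across the prediction horizon, can be sketched with a plain linear model. This is an illustrative Euler recursion only: the controller described above uses a linear parameter-varying model and a full constrained MPC optimization, and the matrices below are hypothetical.

```python
import numpy as np

def propagate_rate_based(A, B, x0, u_seq, dt):
    """Propagate a linear rate model x_dot = A x + B u across a horizon
    with forward-Euler steps, returning the predicted state trajectory."""
    x = x0.copy()
    traj = [x.copy()]
    for u in u_seq:
        x = x + dt * (A @ x + B @ u)  # integrate the state derivative
        traj.append(x.copy())
    return np.array(traj)

# Hypothetical scalar system: stable pole at -1, unit input gain,
# three constant control moves over a 0.1 s step.
A = np.array([[-1.0]])
B = np.array([[1.0]])
traj = propagate_rate_based(A, B, np.array([0.0]), [np.array([1.0])] * 3, dt=0.1)
# traj[-1, 0] ≈ 0.271
```

    In an MPC loop, a trajectory like `traj` would be evaluated against the clearance objective at each sample, and the control sequence re-optimized in receding-horizon fashion.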

  16. Slice-to-Volume Nonrigid Registration of Histological Sections to MR Images of the Human Brain

    PubMed Central

    Osechinskiy, Sergey; Kruggel, Frithjof

    2011-01-01

    Registration of histological images to three-dimensional imaging modalities is an important step in quantitative analysis of brain structure, in architectonic mapping of the brain, and in investigation of the pathology of a brain disease. Reconstruction of histology volume from serial sections is a well-established procedure, but it does not address registration of individual slices from sparse sections, which is the aim of the slice-to-volume approach. This study presents a flexible framework for intensity-based slice-to-volume nonrigid registration algorithms with a geometric transformation deformation field parametrized by various classes of spline functions: thin-plate splines (TPS), Gaussian elastic body splines (GEBS), or cubic B-splines. Algorithms are applied to cross-modality registration of histological and magnetic resonance images of the human brain. Registration performance is evaluated across a range of optimization algorithms and intensity-based cost functions. For a particular case of histological data, best results are obtained with a TPS three-dimensional (3D) warp, a new unconstrained optimization algorithm (NEWUOA), and a correlation-coefficient-based cost function. PMID:22567290

  17. Precise regional baseline estimation using a priori orbital information

    NASA Technical Reports Server (NTRS)

    Lindqwister, Ulf J.; Lichten, Stephen M.; Blewitt, Geoffrey

    1990-01-01

    A solution using GPS measurements acquired during the CASA Uno campaign has resulted in 3-4 mm horizontal daily baseline repeatability and 13 mm vertical repeatability for a 729 km baseline, located in North America. The agreement with VLBI is at the level of 10-20 mm for all components. The results were obtained with the GIPSY orbit determination and baseline estimation software and are based on five single-day data arcs spanning January 20, 21, 25, 26, and 27, 1988. The estimation strategy included resolving the carrier phase integer ambiguities, utilizing an optimal set of fixed reference stations, and constraining GPS orbit parameters by applying a priori information. A multiday GPS orbit and baseline solution has yielded similar 2-4 mm horizontal daily repeatabilities for the same baseline, consistent with the constrained single-day arc solutions. The application of weak constraints to the orbital state for single-day data arcs produces solutions which approach the precise orbits obtained with unconstrained multiday arc solutions.

  18. Adiabatic quantum computation with neutral atoms via the Rydberg blockade

    NASA Astrophysics Data System (ADS)

    Goyal, Krittika; Deutsch, Ivan

    2011-05-01

    We study a trapped-neutral-atom implementation of the adiabatic model of quantum computation whereby the Hamiltonian of a set of interacting qubits is changed adiabatically so that its ground state evolves to the desired output of the algorithm. We employ the ``Rydberg blockade interaction,'' which previously has been used to implement two-qubit entangling gates in the quantum circuit model. Here it is employed via off-resonant virtual dressing of the excited levels, so that atoms always remain in the ground state. The resulting dressed-Rydberg interaction is insensitive to the distance between the atoms within a certain blockade radius, making this process robust to temperature and vibrational fluctuations. Single-qubit interactions are implemented with global microwaves, and atoms are locally addressed with light shifts. With these ingredients, we study a protocol to implement the two-qubit Quadratic Unconstrained Binary Optimization (QUBO) problem. We model atom trapping, addressing, coherent evolution, and decoherence. We also explore collective control of the many-atom system and generalize the QUBO problem to multiple qubits. We acknowledge funding from the AQUARIUS project, Sandia National Laboratories.
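    For context, a QUBO instance of the size considered here (two variables) can be checked by exhaustive enumeration. The sketch below is a generic brute-force reference solver for x^T Q x over binary vectors, not the adiabatic protocol itself; the matrix is a made-up example.

```python
from itertools import product
import numpy as np

def solve_qubo_brute_force(Q):
    """Exhaustively minimize x^T Q x over x in {0,1}^n.
    Only feasible for small n, matching the few-qubit setting."""
    n = Q.shape[0]
    best_x, best_val = None, float("inf")
    for bits in product((0, 1), repeat=n):
        x = np.array(bits)
        val = float(x @ Q @ x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Hypothetical two-variable instance; off-diagonal terms penalize
# turning both bits on, so the minimum is x = (1, 0).
Q = np.array([[-1.0, 2.0],
              [2.0, -0.5]])
x, v = solve_qubo_brute_force(Q)
# x = [1, 0], v = -1.0
```

    A brute-force check like this is how small adiabatic or annealing runs are typically validated against the true optimum.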

  19. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    NASA Astrophysics Data System (ADS)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
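    The occupation-angle device can be sketched as follows. The specific trigonometric form n_i = 2 sin^2(θ_i) is an assumed example of such a parametrization (the paper's exact choice may differ), but any mapping of this kind keeps every occupation number in [0, 2] while the angles themselves remain unconstrained.

```python
import numpy as np

def occupations(angles):
    """Map unconstrained occupation angles to occupation numbers in [0, 2].
    The form n_i = 2 sin^2(theta_i) is an illustrative assumption, not
    necessarily the parametrization used by SOEO."""
    return 2.0 * np.sin(angles) ** 2

# Any real-valued angles are admissible; the bounds 0 <= n_i <= 2 are
# enforced automatically by the trigonometric map.
angles = np.array([np.pi / 2, np.pi / 4, 0.1])
n = occupations(angles)          # n[0] = 2.0 (full), n[1] = 1.0 (fractional)
total_electrons = n.sum()        # constrained separately in the Newton step
```

    The electron-number constraint then acts on the angles through the restricted Newton-Raphson step rather than through explicit bounds, which is what makes the overall optimization unconstrained.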

  20. Toolkit of Available EPA Green Infrastructure Modeling Software: Watershed Management Optimization Support Tool (WMOST)

    EPA Science Inventory

    Watershed Management Optimization Support Tool (WMOST) is a software application designed to facilitate integrated water resources management across wet and dry climate regions. It allows water resources managers and planners to screen a wide range of practices across their watersh...

  1. Integrating Multibody Simulation and CFD: toward Complex Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Pieri, Stefano; Poloni, Carlo; Mühlmeier, Martin

    This paper describes the use of integrated multidisciplinary analysis and optimization of a race car model on a predefined circuit. The objective is the definition of the most efficient geometric configuration that can guarantee the lowest lap time. To carry out this study it was necessary to interface the design optimization software modeFRONTIER with the following software packages: CATIA v5, a three-dimensional CAD package, used for the definition of the parametric geometry; A.D.A.M.S./Motorsport, a multi-body dynamic simulation package; IcemCFD, a mesh generator, for the automatic generation of the CFD grid; and CFX, a Navier-Stokes code, for fluid-dynamic force prediction. The process integration makes it possible to compute, for each geometric configuration, a set of aerodynamic coefficients that are then used in the multibody simulation to compute the lap time. Finally, an automatic optimization procedure is started and the lap time is minimized. The whole process is executed on a Linux cluster running CFD simulations in parallel.

  2. Development and use of mathematical models and software frameworks for integrated analysis of agricultural systems and associated water use impacts

    USGS Publications Warehouse

    Fowler, K. R.; Jenkins, E.W.; Parno, M.; Chrispell, J.C.; Colón, A. I.; Hanson, Randall T.

    2016-01-01

    The development of appropriate water management strategies requires, in part, a methodology for quantifying and evaluating the impact of water policy decisions on regional stakeholders. In this work, we describe the framework we are developing to enhance the body of resources available to policy makers, farmers, and other community members in their efforts to understand, quantify, and assess the often competing objectives water consumers have with respect to usage. The foundation for the framework is the construction of a simulation-based optimization software tool using two existing software packages. In particular, we couple a robust optimization software suite (DAKOTA) with the USGS MF-OWHM water management simulation tool to provide a flexible software environment that will enable the evaluation of one or multiple (possibly competing) user-defined (or stakeholder) objectives. We introduce the individual software components and outline the communication strategy we defined for the coupled development. We present numerical results for case studies related to crop portfolio management with several defined objectives. The objectives are not optimally satisfied for any single user class, demonstrating the capability of the software tool to aid in the evaluation of a variety of competing interests.

  3. The disagreement between the ideal observer and human observers in hardware and software imaging system optimization: theoretical explanations and evidence

    NASA Astrophysics Data System (ADS)

    He, Xin

    2017-03-01

    The ideal observer is widely used in imaging system optimization. One practical question remains open: do the ideal and human observers have the same preference in system optimization and evaluation? Based on the ideal observer's mathematical properties proposed by Barrett et al. and the empirical properties of human observers investigated by Myers et al., I attempt to pursue the general rules regarding the applicability of the ideal observer in system optimization. Particularly, in software optimization, the ideal observer pursues data conservation while humans pursue data presentation or perception. In hardware optimization, the ideal observer pursues a system with the maximum total information, while humans pursue a system with the maximum selected (e.g., certain frequency bands) information. These different objectives may result in different system optimizations between human and the ideal observers. Thus, an ideal-observer-optimized system is not necessarily optimal for humans. I cite empirical evidence in search and detection tasks, in hardware and software evaluation, in X-ray CT, pinhole imaging, as well as emission computed tomography to corroborate the claims. (Disclaimer: the views expressed in this work do not necessarily represent those of the FDA)

  4. Adaptive velocity-based six degree of freedom load control for real-time unconstrained biomechanical testing.

    PubMed

    Lawless, I M; Ding, B; Cazzolato, B S; Costi, J J

    2014-09-22

    Robotic biomechanics is a powerful tool for further developing our understanding of biological joints, tissues, and their repair. Both velocity-based and hybrid force control methods have been applied to biomechanics, but the complex and non-linear properties of joints have limited these to slow or stepwise loading, which may not capture the real-time behaviour of joints. This paper presents a novel force control scheme combining stiffness- and velocity-based methods, aimed at achieving six-degree-of-freedom unconstrained force control at physiological loading rates.
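    The combination of stiffness and velocity control can be caricatured in one degree of freedom: divide the force error by an estimated specimen stiffness to get a displacement correction, and command a proportional velocity. This is a hypothetical 1-DOF sketch under assumed gains; the scheme above is adaptive and operates in all six degrees of freedom.

```python
def velocity_force_control_step(f_target, f_measured, stiffness, gain=1.0):
    """One step of a stiffness-based velocity command: scale the force
    error by the inverse of the estimated stiffness, then apply a
    proportional gain to obtain the commanded velocity (1-DOF sketch)."""
    force_error = f_target - f_measured
    return gain * force_error / stiffness

# For the same 6 N force error, a stiffer specimen receives a smaller
# velocity command, which is what keeps the loop stable at high rates.
v_soft = velocity_force_control_step(10.0, 4.0, stiffness=100.0)    # 0.06
v_stiff = velocity_force_control_step(10.0, 4.0, stiffness=1000.0)  # 0.006
```

    In an adaptive scheme, the stiffness estimate itself would be updated online from the measured force-displacement response rather than held fixed as here.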

  5. Resolving ice cloud optical thickness biases between CALIOP and MODIS using infrared retrievals

    NASA Astrophysics Data System (ADS)

    Holz, R. E.; Platnick, S.; Meyer, K.; Vaughan, M.; Heidinger, A.; Yang, P.; Wind, G.; Dutcher, S.; Ackerman, S.; Amarasinghe, N.; Nagle, F.; Wang, C.

    2015-10-01

    Despite its importance as one of the key radiative properties that determines the impact of upper tropospheric clouds on the radiation balance, ice cloud optical thickness (IOT) has proven to be one of the more challenging properties to retrieve from space-based remote sensing measurements. In particular, optically thin upper tropospheric ice clouds (cirrus) have been especially challenging due to their tenuous nature, extensive spatial scales, and complex particle shapes and light scattering characteristics. The lack of independent validation motivates the investigation presented in this paper, wherein systematic biases between MODIS Collection 5 (C5) and CALIOP Version 3 (V3) unconstrained retrievals of tenuous IOT (< 3) are examined using a month of collocated A-Train observations. An initial comparison revealed a factor of two bias between the MODIS and CALIOP IOT retrievals. This bias is investigated using an infrared (IR) radiative closure approach that compares both products with MODIS IR cirrus retrievals developed for this assessment. The analysis finds that both the MODIS C5 and the unconstrained CALIOP V3 retrievals are biased (high and low, respectively) relative to the IR IOT retrievals. Based on this finding, the MODIS and CALIOP algorithms are investigated with the goal of explaining and minimizing the biases relative to the IR. For MODIS we find that the assumed ice single scattering properties used for the C5 retrievals are not consistent with the mean IR COT distribution. The C5 ice scattering database results in the asymmetry parameter (g) varying as a function of effective radius with mean values that are too large. 
The MODIS retrievals have been brought into agreement with the IR by adopting a new ice scattering model for Collection 6 (C6) consisting of a modified gamma distribution comprised of a single habit (severely roughened aggregated columns); the C6 ice cloud optical property models have a constant g ~ 0.75 in the mid-visible spectrum, 5-15 % smaller than C5. For CALIOP, the assumed lidar ratio for unconstrained retrievals is fixed at 25 sr for the V3 data products. This value is found to be inconsistent with the constrained (predominantly nighttime) CALIOP retrievals. An experimental data set was produced using a modified lidar ratio of 32 sr for the unconstrained retrievals (an increase of 28 %), selected to provide consistency with the constrained V3 results. These modifications greatly improve the agreement with the IR and provide consistency between the MODIS and CALIOP products. Based on these results the recently released MODIS C6 optical products use the single habit distribution given above, while the upcoming CALIOP V4 unconstrained algorithm will use higher lidar ratios for unconstrained retrievals.
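    The quoted 28 % figure is simply the ratio of the two assumed lidar ratios, and to first order for optically thin layers the retrieved optical thickness scales with the assumed lidar ratio. A back-of-the-envelope sketch, with a hypothetical retrieval value:

```python
# Arithmetic behind the stated 28% increase in the assumed lidar ratio.
old_sr, new_sr = 25.0, 32.0          # V3 default vs. experimental value (sr)
increase = (new_sr - old_sr) / old_sr  # 0.28, i.e. 28%

# To first order for optically thin layers, retrieved optical thickness
# scales with the assumed lidar ratio, so the same factor applies to IOT.
tau_old = 0.5                         # hypothetical unconstrained V3 retrieval
tau_new = tau_old * (new_sr / old_sr) # 0.64
```

    This linear scaling is only the thin-cloud limit; for larger optical depths the two-way transmittance correction makes the dependence on the lidar ratio nonlinear.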

  6. Resolving Ice Cloud Optical Thickness Biases Between CALIOP and MODIS Using Infrared Retrievals

    NASA Technical Reports Server (NTRS)

    Holz, R. E.; Platnick, S.; Meyer, K.; Vaughan, M.; Heidinger, A.; Yang, P.; Wind, G.; Dutcher, S.; Ackerman, S.; Amarasinghe, N.; hide

    2015-01-01

    Despite its importance as one of the key radiative properties that determines the impact of upper tropospheric clouds on the radiation balance, ice cloud optical thickness (IOT) has proven to be one of the more challenging properties to retrieve from space-based remote sensing measurements. In particular, optically thin upper tropospheric ice clouds (cirrus) have been especially challenging due to their tenuous nature, extensive spatial scales, and complex particle shapes and light scattering characteristics. The lack of independent validation motivates the investigation presented in this paper, wherein systematic biases between MODIS Collection 5 (C5) and CALIOP Version 3 (V3) unconstrained retrievals of tenuous IOT (< 3) are examined using a month of collocated A-Train observations. An initial comparison revealed a factor of two bias between the MODIS and CALIOP IOT retrievals. This bias is investigated using an infrared (IR) radiative closure approach that compares both products with MODIS IR cirrus retrievals developed for this assessment. The analysis finds that both the MODIS C5 and the unconstrained CALIOP V3 retrievals are biased (high and low, respectively) relative to the IR IOT retrievals. Based on this finding, the MODIS and CALIOP algorithms are investigated with the goal of explaining and minimizing the biases relative to the IR. For MODIS we find that the assumed ice single scattering properties used for the C5 retrievals are not consistent with the mean IR COT distribution. The C5 ice scattering database results in the asymmetry parameter (g) varying as a function of effective radius with mean values that are too large. The MODIS retrievals have been brought into agreement with the IR by adopting a new ice scattering model for Collection 6 (C6) consisting of a modified gamma distribution comprised of a single habit (severely roughened aggregated columns); the C6 ice cloud optical property models have a constant g ≈ 0.75 in the mid-visible spectrum, 5-15% smaller than C5. For CALIOP, the assumed lidar ratio for unconstrained retrievals is fixed at 25 sr for the V3 data products. This value is found to be inconsistent with the constrained (predominantly nighttime) CALIOP retrievals. An experimental data set was produced using a modified lidar ratio of 32 sr for the unconstrained retrievals (an increase of 28%), selected to provide consistency with the constrained V3 results. These modifications greatly improve the agreement with the IR and provide consistency between the MODIS and CALIOP products. Based on these results the recently released MODIS C6 optical products use the single habit distribution given above, while the upcoming CALIOP V4 unconstrained algorithm will use higher lidar ratios for unconstrained retrievals.

  7. Resolving ice cloud optical thickness biases between CALIOP and MODIS using infrared retrievals

    NASA Astrophysics Data System (ADS)

    Holz, Robert E.; Platnick, Steven; Meyer, Kerry; Vaughan, Mark; Heidinger, Andrew; Yang, Ping; Wind, Gala; Dutcher, Steven; Ackerman, Steven; Amarasinghe, Nandana; Nagle, Fredrick; Wang, Chenxi

    2016-04-01

    Despite its importance as one of the key radiative properties that determines the impact of upper tropospheric clouds on the radiation balance, ice cloud optical thickness (IOT) has proven to be one of the more challenging properties to retrieve from space-based remote sensing measurements. In particular, optically thin upper tropospheric ice clouds (cirrus) have been especially challenging due to their tenuous nature, extensive spatial scales, and complex particle shapes and light-scattering characteristics. The lack of independent validation motivates the investigation presented in this paper, wherein systematic biases between MODIS Collection 5 (C5) and CALIOP Version 3 (V3) unconstrained retrievals of tenuous IOT (< 3) are examined using a month of collocated A-Train observations. An initial comparison revealed a factor of 2 bias between the MODIS and CALIOP IOT retrievals. This bias is investigated using an infrared (IR) radiative closure approach that compares both products with MODIS IR cirrus retrievals developed for this assessment. The analysis finds that both the MODIS C5 and the unconstrained CALIOP V3 retrievals are biased (high and low, respectively) relative to the IR IOT retrievals. Based on this finding, the MODIS and CALIOP algorithms are investigated with the goal of explaining and minimizing the biases relative to the IR. For MODIS we find that the assumed ice single-scattering properties used for the C5 retrievals are not consistent with the mean IR COT distribution. The C5 ice scattering database results in the asymmetry parameter (g) varying as a function of effective radius with mean values that are too large. 
The MODIS retrievals have been brought into agreement with the IR by adopting a new ice scattering model for Collection 6 (C6) consisting of a modified gamma distribution comprised of a single habit (severely roughened aggregated columns); the C6 ice cloud optical property models have a constant g ≈ 0.75 in the mid-visible spectrum, 5-15 % smaller than C5. For CALIOP, the assumed lidar ratio for unconstrained retrievals is fixed at 25 sr for the V3 data products. This value is found to be inconsistent with the constrained (predominantly nighttime) CALIOP retrievals. An experimental data set was produced using a modified lidar ratio of 32 sr for the unconstrained retrievals (an increase of 28 %), selected to provide consistency with the constrained V3 results. These modifications greatly improve the agreement with the IR and provide consistency between the MODIS and CALIOP products. Based on these results the recently released MODIS C6 optical products use the single-habit distribution given above, while the upcoming CALIOP V4 unconstrained algorithm will use higher lidar ratios for unconstrained retrievals.

  8. Increase of Gas-Turbine Plant Efficiency by Optimizing Operation of Compressors

    NASA Astrophysics Data System (ADS)

    Matveev, V.; Goriachkin, E.; Volkov, A.

    2018-01-01

    The article presents an optimization method for improving the working process of axial compressors of gas turbine engines. The developed method automatically searches for the best compressor blade geometry by coupling the optimization software IOSO with the CFD software NUMECA Fine/Turbo. The compressor parameters were calculated at the working and stall points of the performance map at each optimization step. The study was carried out for a seven-stage high-pressure compressor and three-stage low-pressure compressors. As a result of the optimization, efficiency improvements were achieved for all investigated compressors.

  9. Application of a neural network to simulate analysis in an optimization process

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Lamarsh, William J., II

    1992-01-01

A new experimental software package called NETS/PROSSS, aimed at reducing the computing time required to solve a complex design problem, is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network approximates the results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as the initial design in a conventional optimization process, making it possible to converge to an optimum solution with significantly fewer iterations.
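The surrogate idea described above can be sketched in a few lines: fit a cheap model to a handful of expensive analysis evaluations, minimize the surrogate, and use that minimizer as a warm start. The quadratic surrogate and toy "analysis" below are illustrative stand-ins for the neural network and the finite element analysis; nothing here comes from NETS/PROSSS itself.

```python
def expensive_analysis(x):
    # Stand-in for a costly finite-element analysis call.
    return (x - 2.0) ** 2 + 3.0

def fit_quadratic(xs, ys):
    """Exact quadratic through three samples (Lagrange form); returns
    coefficients (a, b) of a*x**2 + b*x + c (c is not needed to locate
    the surrogate minimizer)."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -y0 * (x1 + x2) / d0 - y1 * (x0 + x2) / d1 - y2 * (x0 + x1) / d2
    return a, b

# Three expensive samples train the surrogate; its analytic minimizer
# -b/(2a) then serves as the warm start for a full optimization.
samples = (0.0, 1.0, 4.0)
values = tuple(expensive_analysis(x) for x in samples)
a, b = fit_quadratic(samples, values)
x_start = -b / (2.0 * a)
```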

  10. Digital radiography: optimization of image quality and dose using multi-frequency software.

    PubMed

    Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D

    2012-09-01

New developments in the processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children, who are believed to be more sensitive to ionizing radiation than adults. The aim was to examine whether MFP software reduces the radiation dose without compromising quality in DR of the femur, using 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were acquired on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images of a technical contrast-detail phantom (CDRAD 2.0) provided an objective image-quality assessment. Optimal image quality was maintained at a dose reduction of 61% with MLT(S)-optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% compared to the reference image. The software parameters found to have a significant impact on image quality were dose (mAs), the dynamic range dark region and the frequency band. By optimizing image-processing parameters, a significant dose reduction is possible without significant loss of image quality.

  11. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  12. Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…

  13. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS?s Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  14. Multibody dynamical modeling for spacecraft docking process with spring-damper buffering device: A new validation approach

    NASA Astrophysics Data System (ADS)

    Daneshjou, Kamran; Alibakhshi, Reza

    2018-01-01

In the current manuscript, the process of spacecraft docking, one of the main risky operations in an on-orbit servicing mission, is modeled based on unconstrained multibody dynamics. A spring-damper buffering device is utilized in the docking probe-cone system for micro-satellites. Because impact inevitably occurs during the docking process and remarkably affects the motion characteristics of multibody systems, a continuous contact force model needs to be considered. The spring-damper buffering device, which keeps the spacecraft stable in orbit when impact occurs, connects a base (cylinder) inserted in the chaser satellite to the end of the docking probe. Furthermore, by considering a revolute joint equipped with a torsional shock absorber between the base and the chaser satellite, the docking probe can experience translational and rotational motions simultaneously. Although the spacecraft docking process with buffering mechanisms may be modeled by constrained multibody dynamics, this paper presents a simple and efficient formulation that eliminates the surplus generalized coordinates and solves the impact docking problem based on unconstrained Lagrangian mechanics. In an example problem, model verification is first accomplished by comparing the computed results with those recently reported in the literature. Second, the accuracy of the presented model is evaluated with a new alternative validation approach based on the constrained multibody problem. This proposed verification approach can be applied to indirectly solve constrained multibody problems with minimum effort. The time history of the impact force, the influence of system flexibility, and the physical interaction between the shock absorber and the penetration depth caused by impact are the issues followed in this paper. Third, MATLAB/Simulink multibody dynamic analysis software is applied to build the impact docking model, validate the computed results, and then investigate the trajectories of both satellites for a successful capture process.
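A continuous contact force model of the kind referred to above can be sketched with the widely used Hunt-Crossley form, in which a Hertzian stiffness term is augmented by damping proportional to the penetration depth. This is a generic textbook model with illustrative parameter values, not necessarily the specific contact model of the paper.

```python
def hunt_crossley_force(delta, delta_dot, k=1e5, n=1.5, e=0.8, v0=0.5):
    """Continuous contact force F = k*delta**n * (1 + 1.5*(1-e**2)*delta_dot/v0).
    delta: penetration depth (m); delta_dot: penetration rate (m/s);
    k: contact stiffness; n: Hertz exponent; e: coefficient of restitution;
    v0: impact velocity at first contact. All values are illustrative."""
    if delta <= 0.0:
        return 0.0  # bodies separated: no contact force
    damping = 1.5 * (1.0 - e ** 2) * delta_dot / v0
    return k * delta ** n * (1.0 + damping)

# The damping term makes the force larger during compression
# (delta_dot > 0) than during restitution (delta_dot < 0), producing
# the energy-dissipating hysteresis loop of a real impact.
f_compression = hunt_crossley_force(0.001, 0.5)
f_restitution = hunt_crossley_force(0.001, -0.5)
```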

  15. Free Swimming in Ground Effect

    NASA Astrophysics Data System (ADS)

    Cochran-Carney, Jackson; Wagenhoffer, Nathan; Zeyghami, Samane; Moored, Keith

    2017-11-01

    A free-swimming potential flow analysis of unsteady ground effect is conducted for two-dimensional airfoils via a method of images. The foils undergo a pure pitching motion about their leading edge, and the positions of the body in the streamwise and cross-stream directions are determined by the equations of motion of the body. It is shown that the unconstrained swimmer is attracted to a time-averaged position that is mediated by the flow interaction with the ground. The robustness of this fluid-mediated equilibrium position is probed by varying the non-dimensional mass, initial conditions and kinematic parameters of motion. Comparisons to the foil's fixed-motion counterpart are also made to pinpoint the effect that free swimming near the ground has on wake structures and the fluid-mediated forces over time. Optimal swimming regimes for near-boundary swimming are determined by examining asymmetric motions.
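The method-of-images idea used above can be illustrated with the classical point-vortex result: a wall at y = 0 is replaced by an opposite-signed image vortex at the mirror point, which enforces zero normal flow at the wall and induces a drift Γ/(4πh) on the real vortex. This is a textbook potential-flow sketch, not the paper's unsteady foil formulation.

```python
import math

def vortex_velocity(gamma, vx, vy, px, py):
    """Velocity (u, v) at (px, py) induced by a 2-D point vortex of
    circulation gamma located at (vx, vy)."""
    dx, dy = px - vx, py - vy
    r2 = dx * dx + dy * dy
    return (-gamma * dy / (2.0 * math.pi * r2),
            gamma * dx / (2.0 * math.pi * r2))

def ground_effect_velocity(gamma, h, px, py):
    """Total velocity at (px, py) for a vortex of circulation gamma at
    (0, h) above a wall at y = 0, modeled by adding an image vortex of
    circulation -gamma at the mirror point (0, -h)."""
    u1, v1 = vortex_velocity(gamma, 0.0, h, px, py)
    u2, v2 = vortex_velocity(-gamma, 0.0, -h, px, py)
    return u1 + u2, v1 + v2

# Drift of the real vortex: only the image contributes at its location.
u_drift, v_drift = vortex_velocity(-1.0, 0.0, -1.0, 0.0, 1.0)
```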

  16. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
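The three ingredients just listed can be combined into a short sketch: to project a point onto an intersection of sets, majorize the penalized objective using projections onto each set separately, solve the resulting surrogate in closed form, and increase the penalty constant between sweeps. The toy problem below (two intervals on the real line) is illustrative only; the paper's algorithms handle general convex sets and add quasi-Newton acceleration.

```python
def project_interval(x, lo, hi):
    # Euclidean projection onto the closed interval [lo, hi].
    return min(max(x, lo), hi)

def proximal_distance(y, intervals, rho=1.0, grow=2.0, sweeps=60, inner=50):
    """Minimize (x - y)**2 over the intersection of intervals by the
    majorization-minimization + penalty scheme: each inner update solves
    argmin (x - y)**2 + (rho/2) * sum_i (x - P_i(x_k))**2 in closed form,
    and rho grows geometrically so iterates approach feasibility."""
    x, m = y, len(intervals)
    for _ in range(sweeps):
        for _ in range(inner):
            anchor = sum(project_interval(x, lo, hi) for lo, hi in intervals)
            x = (2.0 * y + rho * anchor) / (2.0 + rho * m)
        rho *= grow
    return x

# Project y = 3 onto [0, 1] intersected with [0.5, 2]; the answer is 1.
x_hat = proximal_distance(3.0, [(0.0, 1.0), (0.5, 2.0)])
```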

  17. A Path Algorithm for Constrained Estimation

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2013-01-01

Many least-squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382
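The distinction between quadratic and exact (absolute-value) penalties can be seen on a one-variable toy problem, min (x − 2)² subject to x ≤ 1: the quadratic penalty reaches the constrained solution x = 1 only in the limit ρ → ∞, while the exact penalty recovers it at any finite ρ ≥ 2. This sketch is illustrative; the article's algorithm handles general affine constraints via the sweep operator.

```python
def minimize_unimodal(f, lo=-10.0, hi=10.0, iters=200):
    # Ternary search on a convex function of one variable.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def quadratic_penalty(x, rho):
    # Classical penalty: squared constraint violation.
    return (x - 2.0) ** 2 + rho * max(0.0, x - 1.0) ** 2

def exact_penalty(x, rho):
    # Exact penalty: one-sided absolute-value constraint violation.
    return (x - 2.0) ** 2 + rho * max(0.0, x - 1.0)

rho = 4.0
# Quadratic penalty minimizer is (2 + rho)/(1 + rho) > 1: still infeasible.
x_quad = minimize_unimodal(lambda x: quadratic_penalty(x, rho))
# Exact penalty minimizer sits exactly on the constraint for rho >= 2.
x_exact = minimize_unimodal(lambda x: exact_penalty(x, rho))
```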

  18. GeMS: an advanced software package for designing synthetic genes.

    PubMed

    Jayaraj, Sebastian; Reid, Ralph; Santi, Daniel V

    2005-01-01

    A user-friendly, advanced software package for gene design is described. The software comprises an integrated suite of programs-also provided as stand-alone tools-that automatically performs the following tasks in gene design: restriction site prediction, codon optimization for any expression host, restriction site inclusion and exclusion, separation of long sequences into synthesizable fragments, T(m) and stem-loop determinations, optimal oligonucleotide component design and design verification/error-checking. The output is a complete design report and a list of optimized oligonucleotides to be prepared for subsequent gene synthesis. The user interface accommodates both inexperienced and experienced users. For inexperienced users, explanatory notes are provided such that detailed instructions are not necessary; for experienced users, a streamlined interface is provided without such notes. The software has been extensively tested in the design and successful synthesis of over 400 kb of genes, many of which exceeded 5 kb in length.

  19. RARtool: A MATLAB Software Package for Designing Response-Adaptive Randomized Clinical Trials with Time-to-Event Outcomes.

    PubMed

    Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee

    2015-08-01

    Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool , a user interface software developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.

  20. Interactive software tool to comprehend the calculation of optimal sequence alignments with dynamic programming.

    PubMed

    Ibarra, Ignacio L; Melo, Francisco

    2010-07-01

Dynamic programming (DP) is a general optimization strategy that is successfully used across various disciplines of science. In bioinformatics, it is widely applied in calculating the optimal alignment between pairs of protein or DNA sequences. These alignments form the basis of new, verifiable biological hypotheses. Despite its importance, there are no interactive tools available for training and education on understanding the DP algorithm. Here, we introduce an interactive computer application with a graphical interface, for the purpose of educating students about DP. The program displays the DP scoring matrix and the resulting optimal alignment(s), while allowing the user to modify key parameters such as the values in the similarity matrix, the sequence alignment algorithm version and the gap opening/extension penalties. We hope that this software will be useful to teachers and students of bioinformatics courses, as well as researchers who implement the DP algorithm for diverse applications. The software is freely available at: http://melolab.org/sat. The software is written in the Java computer language, thus it runs on all major platforms and operating systems including Windows, Mac OS X and Linux. All inquiries or comments about this software should be directed to Francisco Melo at fmelo@bio.puc.cl.
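The DP recurrence such a tool visualizes can be written compactly. The sketch below is a minimal global-alignment (Needleman-Wunsch) scorer with linear gap penalties, one of the algorithm variants a scoring-matrix display would cover; the actual software also supports other variants and traceback display.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the Needleman-Wunsch dynamic program.
    F[i][j] holds the best score for aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap          # align a[:i] against all gaps
    for j in range(1, m + 1):
        F[0][j] = j * gap          # align b[:j] against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]
```

With the textbook scoring scheme (match +1, mismatch −1, gap −1), the classic pair GCATGCU/GATTACA scores 0.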

  1. Structural brain correlates of unconstrained motor activity in people with schizophrenia.

    PubMed

    Farrow, Tom F D; Hunter, Michael D; Wilkinson, Iain D; Green, Russell D J; Spence, Sean A

    2005-11-01

Avolition affects quality of life in chronic schizophrenia. We investigated the relationship between unconstrained motor activity and the volume of key executive brain regions in 16 male patients with schizophrenia. Wrist-worn actigraphy monitors were used to record motor activity over a 20 h period. Structural magnetic resonance imaging brain scans were parcellated, and individual volumes for the anterior cingulate cortex and dorsolateral prefrontal cortex were extracted. Patients' total activity was positively correlated with the volume of the left anterior cingulate cortex. These data suggest that the volume of specific executive structures may affect (quantifiable) motor behaviours, with further implications for models of the 'will' and avolition.

  2. Identification of terms to define unconstrained air transportation demands

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.; Kuhlthau, A. R.

    1982-01-01

    The factors involved in the evaluation of unconstrained air transportation systems were carefully analyzed. By definition an unconstrained system is taken to be one in which the design can employ innovative and advanced concepts no longer limited by present environmental, social, political or regulatory settings. Four principal evaluation criteria are involved: (1) service utilization, based on the operating performance characteristics as viewed by potential patrons; (2) community impacts, reflecting decisions based on the perceived impacts of the system; (3) technological feasibility, estimating what is required to reduce the system to practice; and (4) financial feasibility, predicting the ability of the concepts to attract financial support. For each of these criteria, a set of terms or descriptors was identified, which should be used in the evaluation to render it complete. It is also demonstrated that these descriptors have the following properties: (a) their interpretation may be made by different groups of evaluators; (b) their interpretations and the way they are used may depend on the stage of development of the system in which they are used; (c) in formulating the problem, all descriptors should be addressed independent of the evaluation technique selected.

  3. A LAGRANGIAN GAUSS-NEWTON-KRYLOV SOLVER FOR MASS- AND INTENSITY-PRESERVING DIFFEOMORPHIC IMAGE REGISTRATION.

    PubMed

    Mang, Andreas; Ruthotto, Lars

    2017-01-01

    We present an efficient solver for diffeomorphic image registration problems in the framework of Large Deformations Diffeomorphic Metric Mappings (LDDMM). We use an optimal control formulation, in which the velocity field of a hyperbolic PDE needs to be found such that the distance between the final state of the system (the transformed/transported template image) and the observation (the reference image) is minimized. Our solver supports both stationary and non-stationary (i.e., transient or time-dependent) velocity fields. As transformation models, we consider both the transport equation (assuming intensities are preserved during the deformation) and the continuity equation (assuming mass-preservation). We consider the reduced form of the optimal control problem and solve the resulting unconstrained optimization problem using a discretize-then-optimize approach. A key contribution is the elimination of the PDE constraint using a Lagrangian hyperbolic PDE solver. Lagrangian methods rely on the concept of characteristic curves. We approximate these curves using a fourth-order Runge-Kutta method. We also present an efficient algorithm for computing the derivatives of the final state of the system with respect to the velocity field. This allows us to use fast Gauss-Newton based methods. We present quickly converging iterative linear solvers using spectral preconditioners that render the overall optimization efficient and scalable. Our method is embedded into the image registration framework FAIR and, thus, supports the most commonly used similarity measures and regularization functionals. We demonstrate the potential of our new approach using several synthetic and real world test problems with up to 14.7 million degrees of freedom.
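The characteristic curves mentioned above solve the ODE dx/dt = v(x(t), t). A minimal classical fourth-order Runge-Kutta step for this equation can be sketched as follows; the velocity field used here (v = x, whose characteristic through x0 = 1 is e^t) is a stand-in chosen only so the integrator's accuracy can be checked against an exact solution.

```python
import math

def rk4_trace(v, x0, t0, t1, steps):
    """Integrate the characteristic ODE dx/dt = v(x, t) from t0 to t1
    with the classical fourth-order Runge-Kutta scheme."""
    h = (t1 - t0) / steps
    x, t = x0, t0
    for _ in range(steps):
        k1 = v(x, t)
        k2 = v(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = v(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = v(x + h * k3, t + h)
        x += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return x

# Trace the characteristic of v(x, t) = x through x0 = 1 up to t = 1;
# the exact endpoint is e, and RK4's O(h^4) error is far below 1e-8.
x_end = rk4_trace(lambda x, t: x, 1.0, 0.0, 1.0, steps=100)
```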

  4. A method to combine hydrodynamics and constructive design in the optimization of the runner blades of Kaplan turbines

    NASA Astrophysics Data System (ADS)

    Miclosina, C. O.; Balint, D. I.; Campian, C. V.; Frunzaverde, D.; Ion, I.

    2012-11-01

This paper deals with the optimization of axial hydraulic turbines of the Kaplan type. The optimization of the runner blade is presented systematically from two points of view: hydrodynamic and constructive. These aspects are combined in order to achieve safer operation when unsteady effects occur in the runner of the turbine. The design and optimization of the runner blade are performed with the QTurbo3D software developed at the Center for Research in Hydraulics, Automation and Thermal Processes (CCHAPT) of "Eftimie Murgu" University of Resita, Romania. The QTurbo3D software makes it possible to design the meridian channel of hydraulic turbines, design the blades and optimize the runner blade. 3D modeling and motion analysis of the runner blade operating mechanism are accomplished using SolidWorks software. The purpose of the motion study is to obtain the forces, torques and stresses in the runner blade operating mechanism, which are necessary to estimate its lifetime. This paper clearly states the importance of combining hydrodynamics with structural design in the optimization procedure for the runner of hydraulic turbines.

  5. Learning optimal eye movements to unusual faces

    PubMed Central

    Peterson, Matthew F.; Eckstein, Miguel P.

    2014-01-01

    Eye movements, which guide the fovea’s high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer’s default face identification eye movement behavior to the new optimal fixation point and the observer’s peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. PMID:24291712

  6. Intervention in gene regulatory networks with maximal phenotype alteration.

    PubMed

    Yousefi, Mohammadmahdi R; Dougherty, Edward R

    2013-07-15

A basic issue for translational genomics is to model gene interaction via gene regulatory networks (GRNs) and thereby provide an informatics environment to study the effects of intervention (say, via drugs) and to derive effective intervention strategies. Taking the view that the phenotype is characterized by the long-run behavior (steady-state distribution) of the network, we desire interventions to optimally move the probability mass from undesirable to desirable states. Heretofore, two external control approaches have been taken to shift the steady-state mass of a GRN: (i) use a user-defined cost function for which desirable shift of the steady-state mass is a by-product and (ii) use heuristics to design a greedy algorithm. Neither approach provides an optimal control policy relative to long-run behavior. We use a linear programming approach to optimally shift the steady-state mass from undesirable to desirable states, i.e. optimization is directly based on the amount of shift and therefore must outperform previously proposed methods. Moreover, the same basic linear programming structure is used for both unconstrained and constrained optimization, where in the latter case, constraints on the optimization limit the amount of mass that may be shifted to 'ambiguous' states, these being states that are not directly undesirable relative to the pathology of interest but which bear some perceived risk. We apply the method to probabilistic Boolean networks, but the theory applies to any Markovian GRN. Supplementary materials, including the simulation results, MATLAB source code and description of suboptimal methods are available at http://gsp.tamu.edu/Publications/supplementary/yousefi13b. edward@ece.tamu.edu Supplementary data are available at Bioinformatics online.

  7. Dynamic optimization case studies in DYNOPT tool

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

Dynamic programming is typically applied to optimization problems. Because analytical solutions are generally very difficult to obtain, software tools are widely used instead. These software packages are often third-party products built on top of standard simulation software tools on the market. TOMLAB and DYNOPT are typical examples of such tools that can be effectively applied to dynamic programming problems. DYNOPT is presented in this paper due to its licensing policy (a free product under the GPL) and its simplicity of use. DYNOPT is a set of MATLAB functions for determining an optimal control trajectory given a description of the process, the cost to be minimized, and equality and inequality constraints, using orthogonal collocation on finite elements. The actual optimal control problem is solved by complete parameterization of both the control and the state profile vectors. It is assumed that the optimized dynamic model may be described by a set of ordinary differential equations (ODEs) or differential-algebraic equations (DAEs). This collection of functions extends the capability of the MATLAB Optimization Toolbox. The paper introduces the use of DYNOPT for dynamic optimization problems by means of case studies on selected laboratory physical educational models.
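The flavor of such direct methods can be conveyed by a deliberately tiny transcription: steer ẋ = u from x(0) = 0 to x(1) = 1 while minimizing ∫u² dt, using piecewise-constant control on a uniform grid and a projected-gradient loop standing in for the NLP solver (DYNOPT itself uses orthogonal collocation on finite elements; everything below is an illustrative sketch, not its API).

```python
def transcribe_and_solve(n=20, iters=200):
    """Direct transcription of: minimize integral_0^1 u(t)^2 dt subject
    to x' = u, x(0) = 0, x(1) = 1.  With piecewise-constant u on n
    intervals of width h, the endpoint condition becomes h*sum(u) = 1
    and the cost h*sum(u_i**2).  A projected-gradient loop stands in
    for the NLP solver; the analytic optimum is u = 1, cost 1."""
    h = 1.0 / n
    step = 0.45 / h  # stable step size for this quadratic cost

    def project(u):
        # Projection onto the affine constraint h * sum(u) = 1.
        shift = (1.0 / h - sum(u)) / n
        return [ui + shift for ui in u]

    u = project([float(i) for i in range(n)])  # deliberately bad start
    for _ in range(iters):
        # Gradient of h*sum(u_i**2) is 2*h*u_i; descend, then re-project.
        u = project([ui - step * 2.0 * h * ui for ui in u])
    cost = h * sum(ui * ui for ui in u)
    return u, cost

u_opt, cost = transcribe_and_solve()
```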

  8. Automating Structural Analysis of Spacecraft Vehicles

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.

    2004-01-01

    A major effort within NASA's vehicle analysis discipline has been to automate structural analysis and sizing optimization during conceptual design studies of advanced spacecraft. Traditional spacecraft structural sizing has involved detailed finite element analysis (FEA) requiring large degree-of-freedom (DOF) finite element models (FEM). Creation and analysis of these models can be time consuming and limit model size during conceptual designs. The goal is to find an optimal design that meets the mission requirements but produces the lightest structure. A structural sizing tool called HyperSizer has been successfully used in the conceptual design phase of a reusable launch vehicle and planetary exploration spacecraft. The program couples with FEA to enable system level performance assessments and weight predictions including design optimization of material selections and sizing of spacecraft members. The software's analysis capabilities are based on established aerospace structural methods for strength, stability and stiffness that produce adequately sized members and reliable structural weight estimates. The software also helps to identify potential structural deficiencies early in the conceptual design so changes can be made without wasted time. HyperSizer's automated analysis and sizing optimization increases productivity and brings standardization to a systems study. These benefits will be illustrated in examining two different types of conceptual spacecraft designed using the software. A hypersonic air breathing, single stage to orbit (SSTO), reusable launch vehicle (RLV) will be highlighted as well as an aeroshell for a planetary exploration vehicle used for aerocapture at Mars. By showing the two different types of vehicles, the software's flexibility will be demonstrated with an emphasis on reducing aeroshell structural weight. Member sizes, concepts and material selections will be discussed as well as analysis methods used in optimizing the structure. 
Analysis based on the HyperSizer structural sizing software will be discussed. Design trades required to optimize structural weight will be presented.

  9. Multidisciplinary Concurrent Design Optimization via the Internet

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Kelkar, Atul G.; Koganti, Gopichand

    2001-01-01

    A methodology is presented which uses commercial design and analysis software and the Internet to perform concurrent multidisciplinary optimization. The methodology provides a means to develop multidisciplinary designs without requiring that all software be accessible from the same local network. The procedures are amenable to design and development teams whose members, expertise and respective software are not geographically located together. This methodology facilitates multidisciplinary teams working concurrently on a design problem of common interest. Partition of design software to different machines allows each constituent software to be used on the machine that provides the most economy and efficiency. The methodology is demonstrated on the concurrent design of a spacecraft structure and attitude control system. Results are compared to those derived from performing the design with an autonomous FORTRAN program.

  10. A Visual Basic simulation software tool for performance analysis of a membrane-based advanced water treatment plant.

    PubMed

    Pal, P; Kumar, R; Srivastava, N; Chaudhuri, J

    2014-02-01

A Visual Basic simulation software (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly, menu-driven software is based on a dynamic mathematical model of an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software-predicted results corroborate very well with the experimental findings, as indicated by an overall correlation coefficient of the order of 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization and displays the performance of the integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as of the individual units. The software, the first of its kind in its domain and built in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater.

  11. Analyse et design aerodynamique haute-fidelite de l'integration moteur sur un avion BWB

    NASA Astrophysics Data System (ADS)

    Mirzaei Amirabad, Mojtaba

BWB (Blended Wing Body) is an innovative type of aircraft based on the flying wing concept. In this configuration, the wing and the fuselage are blended together smoothly. The BWB offers economical and environmental advantages, reducing fuel consumption through improved aerodynamic performance. In this project, the goal is to improve aerodynamic performance by optimizing the main body of the BWB produced by the conceptual design. The high-fidelity methods applied in this project have been less frequently addressed in the literature. This research develops an automatic optimization procedure to reduce the drag force on the main body. The optimization is carried out in two main stages: before and after engine installation. Our objective is to minimize the drag while taking several constraints into account in a high-fidelity optimization. The commercial software Isight is chosen as the optimizer, within which MATLAB is called to start the optimization process. Geometry is generated using ANSYS-DesignModeler, an unstructured mesh is created by ANSYS-Mesh, and CFD calculations are done with ANSYS-Fluent. All of this software is coupled together in the ANSYS-Workbench environment, which is called by MATLAB. The high-fidelity methods are used during optimization by solving the Navier-Stokes equations. To verify the results, a finer structured mesh is created with the ICEM software and used at each stage of the optimization. The first stage is a 3D optimization of the surface of the main body, before adding the engine. The optimized case is then used as an input for the second stage, in which the nacelle is added. This study achieves an appreciable reduction in the drag coefficient of the BWB without the nacelle. In the second stage (adding the nacelle), a drag reduction is also achieved by performing a local optimization. Furthermore, the flow separation created in the main body-nacelle zone is reduced.

  12. A Systematic Software, Firmware, and Hardware Codesign Methodology for Digital Signal Processing

    DTIC Science & Technology

    2014-03-01

    [Only fragments of this abstract survive extraction: a table of possible mappings of optimal leaf-nodes to design patterns for embedded system design, and an acronym glossary.] Software and hardware partitioning is a very difficult challenge in the field of embedded system design.

  13. Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Witzberger, Kevin E.; Zeiler, Tom

    2012-01-01

    This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF) and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable for a general trajectory optimization problem. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed to a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF trajectory TLI optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.

  14. Simulation and Optimization Methods for Assessing the Impact of Aviation Operations on the Environment

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Chen, Neil; Ng, Hok K.

    2010-01-01

    There is increased awareness of anthropogenic factors affecting climate change and of the urgency to slow their negative impact. Greenhouse gases, oxides of nitrogen, and contrails resulting from aviation affect the climate in different and uncertain ways. This paper develops a flexible simulation and optimization software architecture to study the trade-offs involved in reducing emissions. The software environment is used to conduct analysis of two approaches for avoiding contrails using the concepts of contrail frequency index and optimal avoidance trajectories.

  15. A software methodology for compiling quantum programs

    NASA Astrophysics Data System (ADS)

    Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias

    2018-04-01

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.

  16. Optimal reproducibility of gated sestamibi and thallium myocardial perfusion study left ventricular ejection fractions obtained on a solid-state CZT cardiac camera requires operator input.

    PubMed

    Cherk, Martin H; Ky, Jason; Yap, Kenneth S K; Campbell, Patrina; McGrath, Catherine; Bailey, Michael; Kalff, Victor

    2012-08-01

    To evaluate the reproducibility of serial re-acquisitions of gated Tl-201 and Tc-99m sestamibi left ventricular ejection fraction (LVEF) measurements obtained on a new-generation solid-state cardiac camera system during myocardial perfusion imaging, and the importance of manual operator optimization of left ventricular wall tracking. Resting blinded automated (auto) and manual operator optimized (opt) LVEF measurements were measured using ECT toolbox (ECT) and Cedars-Sinai QGS software in two separate cohorts of 55 Tc-99m sestamibi (MIBI) and 50 thallium (Tl-201) myocardial perfusion studies (MPS) acquired in both supine and prone positions on a cadmium zinc telluride (CZT) solid-state camera system. Resting supine and prone automated LVEF measurements were similarly obtained in a further separate cohort of 52 gated cardiac blood pool scans (GCBPS) for validation of methodology and comparison. Bland-Altman, chi-squared, and Levene's equality-of-variance tests were used to analyse the resultant data comparisons. For all radiotracer and software combinations, manual checking and optimization of valve planes (+/- centre radius with ECT software) resulted in significant improvement in MPS LVEF reproducibility that approached that of planar GCBPS. No difference was demonstrated between optimized MIBI/Tl-201 QGS and planar GCBPS LVEF reproducibility (P = .17 and P = .48, respectively). ECT required significantly more manual optimization compared to QGS software in both supine and prone positions, independent of the radiotracer used (P < .02). Reproducibility of gated sestamibi and Tl-201 LVEF measurements obtained during myocardial perfusion imaging with the ECT toolbox or QGS software packages, using a new-generation solid-state cardiac camera with improved image quality, approaches that of planar GCBPS; however, it requires visual quality control and operator optimization of left ventricular wall tracking for best results. 
Using this superior cardiac technology, Tl-201 reproducibility also appears at least equivalent to sestamibi for measuring LVEF.

  17. Image Registration and Data Assimilation as a QUBO on the D-Wave Quantum Annealer

    NASA Astrophysics Data System (ADS)

    Pelissier, C.; LeMoigne, J.; Halem, M.; Simpson, D. G.; Clune, T.

    2016-12-01

    The advent of the commercially available D-Wave quantum annealer has for the first time allowed investigations of the potential of quantum effects to efficiently carry out certain numerical tasks. The D-Wave computer was initially promoted as a tool to solve Quadratic Unconstrained Binary Optimization problems (QUBOs), but currently, it is also being used to generate the Boltzmann statistics required to train Restricted Boltzmann machines (RBMs). We consider the potential of this new architecture in performing numerical computations required to estimate terrestrial carbon fluxes from OCO-2 observations using the LIS model. The use of RBMs is being investigated in this work, but here we focus on the D-Wave as a QUBO solver, and its potential to carry out image registration and data assimilation. QUBOs are formulated for both problems, and results generated using the D-Wave 2X at the NAS supercomputing facility are presented.
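
    As a concrete illustration of the kind of problem such an annealer targets, the sketch below (our own toy example, not from the paper) formulates max-cut on a triangle graph as a QUBO and solves it by exhaustive search, the classical baseline a quantum annealer is compared against:

```python
import itertools

# QUBO for max-cut on a triangle: each edge (i, j) contributes
# -(x_i + x_j - 2*x_i*x_j), so minimizing the energy maximizes the cut.
Q = {(0, 0): -2, (1, 1): -2, (2, 2): -2,   # linear terms on the diagonal
     (0, 1): 2, (1, 2): 2, (0, 2): 2}      # quadratic couplings

def energy(x):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

# exhaustive search over all 2^3 binary assignments
best = min(itertools.product([0, 1], repeat=3), key=energy)
```

    A triangle's maximum cut separates one vertex from the other two (cut size 2, energy -2); an annealer searches the same energy landscape physically rather than by enumeration.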

  18. Allocating dissipation across a molecular machine cycle to maximize flux

    PubMed Central

    Brown, Aidan I.; Sivak, David A.

    2017-01-01

    Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that—in contrast to previous findings—the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both “irreversible” and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation. PMID:29073016

  19. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563
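
    The three ideas the authors combine can be sketched on a toy instance. Below is a minimal distance-majorization iteration (our own illustration, not the authors' code): projecting a point onto the intersection of a disk and a half-space by repeatedly minimizing the objective plus penalized squared distances to the separate sets, with each squared distance majorized at the current iterate's projections:

```python
import numpy as np

y = np.array([2.0, 1.0])         # point to project onto the intersection

def proj_disk(x):                # projection onto the unit disk
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_halfspace(x):           # projection onto {x : x[1] <= 0.3}
    return np.array([x[0], min(x[1], 0.3)])

projs = [proj_disk, proj_halfspace]
x = y.copy()
for mu in [1.0, 10.0, 100.0, 1000.0]:        # classical penalty ramp
    for _ in range(5000):
        # majorize dist(x, C_i)^2 by ||x - P_i(x_k)||^2; the surrogate
        # 0.5*||x - y||^2 + (mu/2) * sum_i ||x - P_i(x_k)||^2
        # then has the closed-form minimizer below
        s = sum(p(x) for p in projs)
        x = (y + mu * s) / (1.0 + mu * len(projs))
```

    The iterate approaches the true projection (sqrt(0.91), 0.3); the paper accelerates such fixed-point iterations with quasi-Newton methods rather than a plain penalty ramp.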

  20. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx^K(1-x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  1. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
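
    The linearity result is easy to see in a conjugate special case. The sketch below (our illustration, not the papers' derivation) takes a Beta(a, b) prior on the rate and binomially distributed jump counts; the MMSE estimate is the posterior mean, which is exactly affine in the observed count:

```python
a, b, n = 2.0, 3.0, 10   # Beta(a, b) prior on the rate; n observation slots

def mmse_estimate(k):
    # beta prior + binomial likelihood => Beta(a + k, b + n - k) posterior;
    # the MMSE estimate is the posterior mean
    return (a + k) / (a + b + n)

# the estimate is an affine function of the observed count k
vals = [mmse_estimate(k) for k in range(n + 1)]
diffs = [vals[i + 1] - vals[i] for i in range(n)]
```

    Every increment of k changes the estimate by the same 1/(a + b + n), so in this case the optimal estimator coincides with the best linear one.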

  2. A sequential solution for anisotropic total variation image denoising with interval constraints

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Noo, Frédéric

    2017-09-01

    We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
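
    The sequential solution can be verified numerically on a tiny instance. The sketch below (our illustration; the three-pixel signal and weights are invented) brute-forces both the unconstrained anisotropic TV denoising problem and its interval-constrained version on a grid, and checks that clipping the unconstrained solution to the uniform interval reproduces the constrained minimizer:

```python
import itertools
import numpy as np

y = np.array([-0.4, 1.6, 0.5])   # noisy three-pixel "image"
lam = 0.2                        # anisotropic TV weight
lo, hi = 0.0, 1.0                # uniform interval constraint

def objective(x):
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.abs(np.diff(x)))

grid = np.arange(-0.5, 1.5001, 0.1)
best_u = best_c = None
fu = fc = np.inf
for combo in itertools.product(grid, repeat=3):
    x = np.array(combo)
    f = objective(x)
    if f < fu:                                   # unconstrained minimum
        fu, best_u = f, x
    if x.min() >= lo - 1e-9 and x.max() <= hi + 1e-9 and f < fc:
        fc, best_c = f, x                        # constrained minimum

sequential = np.clip(best_u, lo, hi)             # thresholding step
```

    Here the unconstrained minimizer is (-0.2, 1.2, 0.7); clipping it to [0, 1] gives exactly the constrained minimizer (0, 1, 0.7), as the paper's KKT argument predicts for uniform interval constraints.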

  3. A Machine Learns to Predict the Stability of Tightly Packed Planetary Systems

    NASA Astrophysics Data System (ADS)

    Tamayo, Daniel; Silburt, Ari; Valencia, Diana; Menou, Kristen; Ali-Dib, Mohamad; Petrovich, Cristobal; Huang, Chelsea X.; Rein, Hanno; van Laerhoven, Christa; Paradise, Adiv; Obertas, Alysa; Murray, Norman

    2016-12-01

    The requirement that planetary systems be dynamically stable is often used to vet new discoveries or set limits on unconstrained masses or orbital elements. This is typically carried out via computationally expensive N-body simulations. We show that characterizing the complicated and multi-dimensional stability boundary of tightly packed systems is amenable to machine-learning methods. We find that training an XGBoost machine-learning algorithm on physically motivated features yields an accurate classifier of stability in packed systems. On the stability timescale investigated (10^7 orbits), it is three orders of magnitude faster than direct N-body simulations. Optimized machine-learning classifiers for dynamical stability may thus prove useful across the discipline, e.g., to characterize the exoplanet sample discovered by the upcoming Transiting Exoplanet Survey Satellite. This proof of concept motivates investing computational resources to train algorithms capable of predicting stability over longer timescales and over broader regions of phase space.

  4. MM Algorithms for Geometric and Signomial Programming

    PubMed Central

    Lange, Kenneth; Zhou, Hua

    2013-01-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
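
    The parameter-separation step can be made concrete on a toy signomial (our own example, not from the paper). The coupled term x1*x2 is majorized via the geometric-arithmetic mean inequality, x1*x2 <= (v/(2u))*x1^2 + (u/(2v))*x2^2 with equality at the current iterate (u, v), so each MM update reduces to independent one-dimensional minimizations:

```python
def f(x1, x2):
    # toy unconstrained signomial: minimized at x1 = x2 = 1, where f = 3
    return x1 * x2 + 1.0 / x1 + 1.0 / x2

u, v = 2.0, 0.5                  # starting iterate
for _ in range(100):
    # separated surrogate: (v/(2u))x1^2 + 1/x1  +  (u/(2v))x2^2 + 1/x2;
    # each 1-D minimization has a closed form: x1^3 = u/v, x2^3 = v/u
    u, v = (u / v) ** (1.0 / 3.0), (v / u) ** (1.0 / 3.0)
```

    The iterates descend monotonically to the global minimum (1, 1); for general signomials the one-dimensional subproblems are solved numerically.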

  5. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  6. σ -SCF: A Direct Energy-targeting Method To Mean-field Excited States

    NASA Astrophysics Data System (ADS)

    Ye, Hongzhou; Welborn, Matthew; Ricke, Nathan; van Voorhis, Troy

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g. Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF, tend to fall into the lowest solution consistent with a given symmetry - a problem known as ``variational collapse''. In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states - ground or excited - are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). This work was funded by a Grant from NSF (CHE-1464804).
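
    The variance-targeting idea has a transparent finite-dimensional analogue (our illustration, not the authors' implementation): minimizing the expectation of (H - omega)^2 over normalized states picks out the eigenvector of H whose eigenvalue lies closest to the target omega, with no variational collapse to the ground state:

```python
import numpy as np

# build a small symmetric "Hamiltonian" with known spectrum {1, 2, 5}
def rot(i, j, th, n=3):
    R = np.eye(n)
    R[i, i] = R[j, j] = np.cos(th)
    R[i, j], R[j, i] = -np.sin(th), np.sin(th)
    return R

U = rot(0, 1, 0.3) @ rot(1, 2, 0.5)
H = U @ np.diag([1.0, 2.0, 5.0]) @ U.T

omega = 2.3                      # energy guess targeting the middle state
M = (H - omega * np.eye(3)) @ (H - omega * np.eye(3))
wm, Vm = np.linalg.eigh(M)       # minimizing <(H - omega)^2> over unit vectors
v_target = Vm[:, 0]

w, V = np.linalg.eigh(H)
v_exact = V[:, np.argmin(np.abs(w - omega))]   # eigenstate nearest omega
```

    In sigma-SCF the same variance functional is minimized over mean-field states rather than exact eigenvectors, so every state, ground or excited, is a local minimum treated on an equal footing.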

  7. A case study in programming a quantum annealer for hard operational planning problems

    NASA Astrophysics Data System (ADS)

    Rieffel, Eleanor G.; Venturelli, Davide; O'Gorman, Bryan; Do, Minh B.; Prystay, Elicia M.; Smelyanskiy, Vadim N.

    2015-01-01

    We report on a case study in programming an early quantum annealer to attack optimization problems related to operational planning. While a number of studies have looked at the performance of quantum annealers on problems native to their architecture, and others have examined performance of select problems stemming from an application area, ours is one of the first studies of a quantum annealer's performance on parametrized families of hard problems from a practical domain. We explore two different general mappings of planning problems to quadratic unconstrained binary optimization (QUBO) problems, and apply them to two parametrized families of planning problems, navigation-type and scheduling-type. We also examine two more compact, but problem-type specific, mappings to QUBO, one for the navigation-type planning problems and one for the scheduling-type planning problems. We study embedding properties and parameter setting and examine their effect on the efficiency with which the quantum annealer solves these problems. From these results, we derive insights useful for the programming and design of future quantum annealers: problem choice, the mapping used, the properties of the embedding, and the annealing profile all matter, each significantly affecting the performance.
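
    The heart of such mappings is turning hard planning constraints into quadratic penalties. The sketch below (our own minimal example, not one of the paper's mappings) encodes "execute exactly one of three actions, preferring the cheapest" by expanding the penalty P*(x0 + x1 + x2 - 1)^2 into QUBO coefficients:

```python
import itertools

costs = [3.0, 1.0, 2.0]   # hypothetical action costs
P = 10.0                  # penalty weight, larger than any cost difference

# expanding P*(sum x_i - 1)^2 with x_i^2 = x_i gives diagonal terms
# c_i - P and pairwise couplings 2P (plus an ignorable constant P)
def energy(x):
    e = sum((c - P) * xi for c, xi in zip(costs, x))
    e += sum(2 * P * x[i] * x[j] for i in range(3) for j in range(i + 1, 3))
    return e

best = min(itertools.product([0, 1], repeat=3), key=energy)
```

    With P large enough, the minimum is the feasible one-hot assignment selecting the cheapest action; choosing such penalty weights well is exactly the parameter-setting issue the paper studies.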

  8. Modeling and quantification of repolarization feature dependency on heart rate.

    PubMed

    Minchole, A; Zacur, E; Pueyo, E; Laguna, P

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of the QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related and with a shorter memory lag than the QT interval.
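
    One standard way to fold such restrictions into the cost function, so that plain descent methods apply, is to reparameterize a restricted parameter as an unconstrained one. The sketch below (our illustration; the paper's actual model and restrictions differ) fits a positive decay-rate parameter by descending on theta with a = exp(theta), which keeps a > 0 automatically:

```python
import math

# synthetic data from y = exp(-a*t) with a positive "memory" parameter a = 2
a_true = 2.0
ts = [0.1 * k for k in range(10)]
ys = [math.exp(-a_true * t) for t in ts]

theta, lr = 0.0, 0.1     # unconstrained variable: a = exp(theta) > 0 always
for _ in range(2000):
    a = math.exp(theta)
    # chain rule: d/dtheta of sum (exp(-a*t) - y)^2 equals (dJ/da) * a
    g = sum(2.0 * (math.exp(-a * t) - y) * (-t * math.exp(-a * t))
            for t, y in zip(ts, ys)) * a
    theta -= lr * g
```

    Descent runs in theta with no feasibility checks, and the recovered rate converges to a = 2.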

  9. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to view conveniently and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  10. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.

  11. Engine With Regression and Neural Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2001-01-01

    At the NASA Glenn Research Center, the NASA engine performance program (NEPP, ref. 1) and the design optimization testbed COMETBOARDS (ref. 2) with regression and neural network analysis-approximators have been coupled to obtain a preliminary engine design methodology. The solution to a high-bypass-ratio subsonic waverotor-topped turbofan engine, which is shown in the preceding figure, was obtained by the simulation depicted in the following figure. This engine is made of 16 components mounted on two shafts with 21 flow stations. The engine is designed for a flight envelope with 47 operating points. The design optimization utilized both neural network and regression approximations, along with the cascade strategy (ref. 3). The cascade used three algorithms in sequence: the method of feasible directions, the sequence of unconstrained minimizations technique, and sequential quadratic programming. The normalized optimum thrusts obtained by the three methods are shown in the following figure: the cascade algorithm with regression approximation is represented by a triangle, a circle is shown for the neural network solution, and a solid line indicates original NEPP results. The solutions obtained from both approximate methods lie within one standard deviation of the benchmark solution for each operating point. The simulation improved the maximum thrust by 5 percent. The performance of the linear regression and neural network methods as alternate engine analyzers was found to be satisfactory for the analysis and operation optimization of air-breathing propulsion engines (ref. 4).
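
    The "sequence of unconstrained minimizations technique" (SUMT) used in the cascade can be sketched in a few lines (our toy example, not the NEPP/COMETBOARDS implementation): each constrained problem is replaced by a sequence of unconstrained problems with a growing penalty weight:

```python
# SUMT / exterior quadratic penalty for:  min x^2 + y^2  s.t.  x + y >= 1
x = y = 0.0
for mu in [1.0, 10.0, 100.0, 1000.0]:          # penalty schedule, warm-started
    lr = 0.9 / (2.0 + 4.0 * mu)                # stable step for this curvature
    for _ in range(20000):                     # unconstrained gradient descent
        viol = max(0.0, 1.0 - x - y)           # constraint violation
        gx = 2.0 * x - 2.0 * mu * viol
        gy = 2.0 * y - 2.0 * mu * viol
        x, y = x - lr * gx, y - lr * gy
```

    Each penalized minimizer is (mu/(1+2mu), mu/(1+2mu)), approaching the true constrained optimum (0.5, 0.5) from the infeasible side as mu grows; warm-starting each stage is what keeps the sequence cheap.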

  12. Improvement of the cruise performances of a wing by means of aerodynamic optimization. Validation with a Far-Field method

    NASA Astrophysics Data System (ADS)

    Jiménez-Varona, J.; Ponsin Roca, J.

    2015-06-01

    Under a contract with AIRBUS MILITARY (AI-M), an exercise to analyze the potential of optimization techniques to improve wing performance at cruise conditions has been carried out using an in-house design code. The original wing was provided by AI-M, and several constraints were posed for the redesign. To maximize the aerodynamic efficiency at cruise, optimizations were performed using the design techniques developed internally at INTA under a research program (Programa de Termofluidodinámica). The code is a gradient-based optimization code, which uses a classical finite-differences approach for gradient computations. Several techniques for search-direction computation are implemented for unconstrained and constrained problems. Techniques for geometry modification are based on different approaches, including perturbation functions for the thickness and/or mean-line distributions and others based on Bézier-curve fitting of a certain degree. It is very important to tackle a real design that involves several constraints, which significantly reduce the feasible design space. The assessment of the code is needed in order to check its capabilities and possible drawbacks; lessons learnt will help in the development of future enhancements. In addition, the results were validated using the well-known TAU flow solver and a far-field drag method in order to accurately determine the improvement in terms of drag counts.
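
    The classical finite-differences approach to gradients is simple to state (a generic sketch, not INTA's implementation): perturb each design variable in turn and difference the objective evaluations:

```python
def grad_central(f, x, h=1e-6):
    # central differences: one pair of objective evaluations per variable
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

# smooth stand-in for a drag objective, with a known analytic gradient
f = lambda v: v[0] ** 2 + 3.0 * v[0] * v[1] + 2.0 * v[1] ** 2
g = grad_central(f, [1.0, 2.0])   # analytic gradient is [8.0, 11.0]
```

    Central differences cost two objective (here, CFD) evaluations per design variable, which is why gradient-based aerodynamic optimization is expensive and why the step size h must balance truncation error against solver noise.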

  13. Software for Optimizing Plans Involving Interdependent Goals

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Gaines, Daniel; Rabideau, Gregg

    2005-01-01

    A computer program enables construction and optimization of plans for activities that are directed toward achievement of goals that are interdependent. Goal interdependence is defined as the achievement of one or more goals affecting the desirability or priority of achieving one or more other goals. This program is overlaid on the Automated Scheduling and Planning Environment (ASPEN) software system, aspects of which have been described in a number of prior NASA Tech Briefs articles. Unlike other known or related planning programs, this program considers interdependences among goals that can change between problems and provides a language for easily specifying such dependences. Specifications of the interdependences can be formulated dynamically and provided to the associated planning software as part of the goal input. Then an optimization algorithm provided by this program enables the planning software to reason about the interdependences and incorporate them into an overall objective function that it uses to rate the quality of a plan under construction and to direct its optimization search. In tests on a series of problems of planning geological experiments by a team of instrumented robotic vehicles (rovers) on new terrain, this program was found to enhance plan quality.
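
    A goal interdependence of the kind described, where achieving one goal raises the value of another, can be captured by a simple additive objective (our hypothetical goals and weights, not ASPEN's specification language):

```python
import itertools

priorities = {"image_rock": 3.0, "drill_rock": 5.0, "weather_reading": 2.0}
# achieving image_rock makes drill_rock more desirable
bonus = {("image_rock", "drill_rock"): 2.5}

def plan_quality(plan):
    score = sum(priorities[g] for g in plan)
    score += sum(b for (g1, g2), b in bonus.items()
                 if g1 in plan and g2 in plan)
    return score

# an optimizer with a two-goal budget should pick the interdependent pair
plans = [set(c) for r in range(3)
         for c in itertools.combinations(priorities, r)]
best = max(plans, key=plan_quality)
```

    The interdependence bonus lifts the image/drill pair to 10.5, and an objective of this shape is what directs the planner's optimization search toward mutually reinforcing goals.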

  14. ConcreteWorks v3 training/user manual (P1) : ConcreteWorks software (P2).

    DOT National Transportation Integrated Search

    2017-04-01

    ConcreteWorks is designed to be a user-friendly software package that can help concrete : professionals optimize concrete mixture proportioning, perform a concrete thermal analysis, and : increase the chloride diffusion service life. The software pac...

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, William; Laird, Carl; Siirola, John

    Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.

  16. Optimal design of the rotor geometry of line-start permanent magnet synchronous motor using the bat algorithm

    NASA Astrophysics Data System (ADS)

    Knypiński, Łukasz

    2017-12-01

    In this paper an algorithm for the optimization of the excitation system of line-start permanent magnet synchronous motors is presented. On the basis of this algorithm, software was developed in the Borland Delphi environment. The software consists of two independent modules: an optimization solver, and a module including the mathematical model of a synchronous motor with a self-start ability. The optimization module contains the bat algorithm procedure. The mathematical model of the motor has been developed in an Ansys Maxwell environment. In order to determine the functional parameters of the motor, additional scripts in the Visual Basic language were developed. Selected results of the optimization calculation are presented and compared with results for the particle swarm optimization algorithm.
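
    For readers unfamiliar with the optimizer, a minimal bat-algorithm loop in the spirit of Yang's original formulation is sketched below on a stand-in objective (our simplification; the actual software couples the solver to an Ansys Maxwell motor model, and details such as the loudness and pulse-rate schedules vary between implementations):

```python
import math
import random

random.seed(1)

def objective(x):            # stand-in for the motor's functional parameters
    return sum(v * v for v in x)

n_bats, dim, iters = 20, 2, 300
f_lo, f_hi, alpha, gamma = 0.0, 2.0, 0.9, 0.9
pos = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_bats)]
vel = [[0.0] * dim for _ in range(n_bats)]
loud = [1.0] * n_bats        # loudness A_i, decreases on success
rate = [0.5] * n_bats        # pulse emission rate r_i, increases on success
fit = [objective(p) for p in pos]
best = min(pos, key=objective)
initial_best = objective(best)

for t in range(iters):
    a_avg = sum(loud) / n_bats
    for i in range(n_bats):
        freq = f_lo + (f_hi - f_lo) * random.random()
        vel[i] = [v + (x - b) * freq for v, x, b in zip(vel[i], pos[i], best)]
        cand = [x + v for x, v in zip(pos[i], vel[i])]
        if random.random() > rate[i]:   # local random walk around the best bat
            cand = [b + 0.1 * a_avg * random.gauss(0.0, 1.0) for b in best]
        fc = objective(cand)
        if fc < fit[i] and random.random() < loud[i]:
            pos[i], fit[i] = cand, fc
            loud[i] *= alpha
            rate[i] = 0.5 * (1.0 - math.exp(-gamma * t))
        if fc < objective(best):
            best = cand
```

    The frequency-tuned velocities provide global exploration while the shrinking loudness-scaled walks refine the best solution, the same exploration/exploitation balance exploited when tuning rotor geometry.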

  17. Method of interplanetary trajectory optimization for the spacecraft with low thrust and swing-bys

    NASA Astrophysics Data System (ADS)

    Konstantinov, M. S.; Thein, M.

    2017-07-01

    The method developed to avoid the complexity of solving the multipoint boundary value problem while optimizing interplanetary trajectories of the spacecraft with electric propulsion and a sequence of swing-bys is presented in the paper. This method is based on the use of preliminary problem solutions for the impulsive trajectories. The preliminary problem analyzed at the first stage of the study is formulated so that the analysis and optimization of a particular flight path is treated as an unconstrained minimization in the space of the selectable parameters. The existing methods can effectively solve this problem and make it possible to identify rational flight paths (the sequence of swing-bys) and to obtain the initial approximation for the main characteristics of the flight path (dates, values of the hyperbolic excess velocity, etc.). These characteristics can be used to optimize the trajectory of the spacecraft with electric propulsion. The special feature of the work is the introduction of a second (intermediate) stage of the research. At this stage some characteristics of the analyzed flight path (e.g. dates of swing-bys) are fixed, and the problem is formulated so that the trajectory of the spacecraft with electric propulsion is optimized on selected sites of the flight path. The end-to-end optimization is carried out at the third (final) stage of the research. The distinctive feature of this stage is the analysis of the full set of optimality conditions for the considered flight path. The analysis of the characteristics of the optimal flight trajectories to Jupiter with Earth, Venus and Mars swing-bys for the spacecraft with electric propulsion is presented. 
The paper shows that the spacecraft weighing more than 7150 kg can be delivered into the vicinity of Jupiter along the trajectory with two Earth swing-bys by use of the space transportation system based on the "Angara A5" rocket launcher, the chemical upper stage "KVTK" and the electric propulsion system with input electrical power of 100 kW.

  18. Source detection in astronomical images by Bayesian model comparison

    NASA Astrophysics Data System (ADS)

    Frean, Marcus; Friedlander, Anna; Johnston-Hollitt, Melanie; Hollitt, Christopher

    2014-12-01

    The next generation of radio telescopes will generate exabytes of data on hundreds of millions of objects, making automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are faint, diffuse objects embedded in noise. There is a pressing need for source finding software that identifies these sources, involves little manual tuning, yet is tractable to calculate. We first give a novel image discretisation method that incorporates uncertainty about how an image should be discretised. We then propose a hierarchical prior for astronomical images, which leads to a Bayes factor indicating how well a given region conforms to a model of source that is exceptionally unconstrained, compared to a model of background. This enables the efficient localisation of regions that are "suspiciously different" from the background distribution, so our method looks not for brightness but for anomalous distributions of intensity, which is much more general. The model of background can be iteratively improved by removing the influence on it of sources as they are discovered. The approach is evaluated by identifying sources in real and simulated data, and performs well on these measures: the Bayes factor is maximized at most real objects, while returning only a moderate number of false positives. In comparison to a catalogue constructed by widely-used source detection software with manual post-processing by an astronomer, our method found a number of dim sources that were missing from the "ground truth" catalogue.
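
    The core idea of scoring how well a region conforms to an "exceptionally unconstrained" source model versus a background model via a Bayes factor can be sketched in one dimension. This is a hedged toy, not the authors' hierarchical prior: the background is Gaussian with known mean and variance, the source model marginalizes an unknown mean over a flat prior, and the patch values and prior bounds are invented for illustration.

```python
import math

def log_norm_pdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def log_bayes_factor(patch, sigma=1.0, m_max=10.0, n_grid=200):
    """log Bayes factor of a 'source' model (unknown mean m, flat prior on
    [0, m_max]) versus a 'background' model (mean 0), both Gaussian with
    known sigma. Positive values favor the source model."""
    log_bg = sum(log_norm_pdf(x, 0.0, sigma) for x in patch)
    # Marginalize the source mean over its flat prior by a grid sum.
    dm = m_max / n_grid
    terms = [sum(log_norm_pdf(x, k * dm, sigma) for x in patch)
             for k in range(n_grid + 1)]
    mx = max(terms)  # log-sum-exp trick for numerical stability
    log_src = mx + math.log(sum(math.exp(t - mx) * dm for t in terms)) - math.log(m_max)
    return log_src - log_bg

bright = [1.8, 2.1, 1.6, 2.4, 1.9]   # dim source embedded in unit-variance noise
quiet = [0.2, -0.4, 0.1, -0.1, 0.3]  # background-like patch
```

    The wide prior on the source mean builds in an Occam penalty: a background-like patch yields a negative log Bayes factor even though the source model fits it at least as well at its maximum.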

  19. ThermoData Engine (TDE): software implementation of the dynamic data evaluation concept. 9. Extensible thermodynamic constraints for pure compounds and new model developments.

    PubMed

    Diky, Vladimir; Chirico, Robert D; Muzny, Chris D; Kazakov, Andrei F; Kroenlein, Kenneth; Magee, Joseph W; Abdulagatov, Ilmutdin; Frenkel, Michael

    2013-12-23

    ThermoData Engine (TDE) is the first full-scale software implementation of the dynamic data evaluation concept, as reported in this journal. The present article describes the background and implementation of new additions in the latest release of TDE. Advances are in the areas of program architecture and quality improvement for automatic property evaluations, particularly for pure compounds. It is shown that selection of appropriate program architecture supports improvement of the quality of the on-demand property evaluations through application of a readily extensible collection of constraints. The basis and implementation for other enhancements to TDE are described briefly. Other enhancements include the following: (1) implementation of model-validity enforcement for specific equations that can provide unphysical results if unconstrained, (2) newly refined group-contribution parameters for estimation of enthalpies of formation for pure compounds containing carbon, hydrogen, and oxygen, (3) implementation of an enhanced group-contribution method (NIST-Modified UNIFAC) in TDE for improved estimation of phase-equilibrium properties for binary mixtures, (4) tools for mutual validation of ideal-gas properties derived through statistical calculations and those derived independently through combination of experimental thermodynamic results, (5) improvements in program reliability and function that stem directly from the recent redesign of the TRC-SOURCE Data Archival System for experimental property values, and (6) implementation of the Peng-Robinson equation of state for binary mixtures, which allows for critical evaluation of mixtures involving supercritical components. Planned future developments are summarized.
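
    The Peng-Robinson equation of state mentioned in item (6) can be evaluated for a pure compound with a short script. This is a textbook implementation, not TDE code; the CO2 critical constants below are standard literature values, and the call at the bottom is an illustrative check at near-ambient conditions where the gas should be close to ideal.

```python
import math
import numpy as np

R = 8.314462618  # universal gas constant, J/(mol K)

def pr_Z_factors(T, P, Tc, Pc, omega):
    """Compressibility factors from the Peng-Robinson EOS for a pure compound.
    Returns the positive real roots of the cubic in Z, sorted ascending
    (smallest = liquid-like, largest = vapor-like)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (A B - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return sorted(z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0)

# CO2 at 300 K and 1 bar: the vapor root should be close to the ideal-gas Z = 1.
Z = pr_Z_factors(T=300.0, P=1.0e5, Tc=304.13, Pc=7.377e6, omega=0.225)
```

    Extending this to binary mixtures, as TDE does, adds van der Waals mixing rules for a and b with a binary interaction parameter.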

  20. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    Wavefront coding is a means of athermalization for infrared imaging systems, in which the design of the phase plate is the key to system performance. This paper applies the externally compiled programs of ZEMAX to the optimization of the phase mask within the normal optical design process, namely by defining the evaluation function of the wavefront coding system based on the consistency of the modulation transfer function (MTF) and improving the speed of optimization through the introduction of mathematical software. The user writes an external program that computes the evaluation function, drawing on the powerful computing features of the mathematical software, in order to find the optimal parameters of the phase mask; convergence is accelerated through a genetic algorithm (GA), and the dynamic data exchange (DDE) interface between ZEMAX and the mathematical software realizes high-speed data exchange. The optimization of a rotationally symmetric phase mask and a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask and up to 10 times with the cubic phase mask, the inconsistency of the MTF decreases markedly, and the optimized systems operate over a temperature range of -40° to 60°C. Results show that this optimization method makes it more convenient to define unconventional optimization goals and to quickly optimize optical systems with special properties, owing to the externally compiled function and DDE; this is of considerable significance for the optimization of unconventional optical systems.

  1. Thermo-mechanical behavior and structure of melt blown shape-memory polyurethane nonwovens.

    PubMed

    Safranski, David L; Boothby, Jennifer M; Kelly, Cambre N; Beatty, Kyle; Lakhera, Nishant; Frick, Carl P; Lin, Angela; Guldberg, Robert E; Griffis, Jack C

    2016-09-01

    New processing methods for shape-memory polymers allow for tailoring material properties for numerous applications. Shape-memory nonwovens have been previously electrospun, but melt-blowing has yet to be evaluated. In order to determine the process parameters affecting shape-memory behavior, this study examined the effect of air pressure and collector speed on the mechanical behavior and shape-recovery of shape-memory polyurethane nonwovens. Mechanical behavior was measured by dynamic mechanical analysis and tensile testing, and shape-recovery was measured by unconstrained and constrained recovery. Microstructure changes throughout the shape-memory cycle were also investigated by micro-computed tomography. It was found that increasing collector speed increases elastic modulus, ultimate strength and recovery stress of the nonwoven, but collector speed does not affect the failure strain or unconstrained recovery. Increasing air pressure decreases the failure strain and increases rubbery modulus and unconstrained recovery, but air pressure does not influence recovery stress. It was also found that during the shape-memory cycle, the connectivity density of the fibers upon recovery does not fully return to the initial values, accounting for the incomplete shape-recovery seen in shape-memory nonwovens. With these parameter-to-property relationships identified, shape-memory nonwovens can be more easily manufactured and tailored for specific applications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system, and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  3. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388

  4. A smart health monitoring chair for nonintrusive measurement of biological signals.

    PubMed

    Baek, Hyun Jae; Chung, Gih Sung; Kim, Ko Keun; Park, Kwang Suk

    2012-01-01

    We developed nonintrusive methods for simultaneous electrocardiogram, photoplethysmogram, and ballistocardiogram measurements that do not require direct contact between instruments and bare skin. These methods were applied to the design of a diagnostic chair for unconstrained heart rate and blood pressure monitoring purposes. Our methods were operationalized through capacitively coupled electrodes installed in the chair back that include high-input impedance amplifiers, and conductive textiles installed in the seat for capacitive driven-right-leg circuit configuration that is capable of recording electrocardiogram information through clothing. Photoplethysmograms were measured through clothing using seat mounted sensors with specially designed amplifier circuits that vary in light intensity according to clothing type. Ballistocardiograms were recorded using a film type transducer material, polyvinylidenefluoride (PVDF), which was installed beneath the seat cover. By simultaneously measuring signals, beat-to-beat heart rates could be monitored even when electrocardiograms were not recorded due to movement artifacts. Beat-to-beat blood pressure was also monitored using unconstrained measurements of pulse arrival time and other physiological parameters, and our experimental results indicated that the estimated blood pressure tended to coincide with actual blood pressure measurements. This study demonstrates the feasibility of our method and device for biological signal monitoring through clothing for unconstrained long-term daily health monitoring that does not require user awareness and is not limited by physical activity.

  5. Experimental verification of electrostatic boundary conditions in gate-patterned quantum devices

    NASA Astrophysics Data System (ADS)

    Hou, H.; Chung, Y.; Rughoobur, G.; Hsiao, T. K.; Nasir, A.; Flewitt, A. J.; Griffiths, J. P.; Farrer, I.; Ritchie, D. A.; Ford, C. J. B.

    2018-06-01

    In a model of a gate-patterned quantum device, it is important to choose the correct electrostatic boundary conditions (BCs) in order to match experiment. In this study, we model gated-patterned devices in doped and undoped GaAs heterostructures for a variety of BCs. The best match is obtained for an unconstrained surface between the gates, with a dielectric region above it and a frozen layer of surface charge, together with a very deep back boundary. Experimentally, we find a  ∼0.2 V offset in pinch-off characteristics of 1D channels in a doped heterostructure before and after etching off a ZnO overlayer, as predicted by the model. Also, we observe a clear quantised current driven by a surface acoustic wave through a lateral induced n-i-n junction in an undoped heterostructure. In the model, the ability to pump electrons in this type of device is highly sensitive to the back BC. Using the improved boundary conditions, it is straightforward to model quantum devices quite accurately using standard software.

  6. GlobiPack v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe

    2010-03-31

    GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms, serving as the line-search procedure with Newton and quasi-Newton optimization and nonlinear equation solver methods. These are standard published 1-D line-search algorithms such as those described in Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line-search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist. There is no specific connection or mention whatsoever to any specific application; one cannot find more general mathematical software.
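
    A representative example of the 1-D line-search algorithms GlobiPack collects, in the sense of Nocedal and Wright, is backtracking under the Armijo sufficient-decrease condition. The sketch below is a generic illustration in Python, not GlobiPack's C++ API; the quadratic test function is invented for the usage example.

```python
def backtracking_line_search(f, grad, x, p, alpha0=1.0, c1=1e-4, rho=0.5):
    """Backtracking line search enforcing the Armijo sufficient-decrease
    condition f(x + a p) <= f(x) + c1 * a * grad(x).p along a descent
    direction p, halving the step until the condition holds."""
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad(x), p))
    assert slope < 0, "p must be a descent direction"
    a = alpha0
    while f([xi + a * pi for xi, pi in zip(x, p)]) > fx + c1 * a * slope:
        a *= rho
    return a

# Quadratic f(x) = x1^2 + 10 x2^2, steepest-descent direction at x0.
f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: [2.0 * x[0], 20.0 * x[1]]
x0 = [1.0, 1.0]
p0 = [-g for g in grad(x0)]
a = backtracking_line_search(f, grad, x0, p0)
```

    Production line searches typically add a curvature (Wolfe) condition and interpolation-based step selection, but the globalization role is the same: guarantee progress of the outer Newton or quasi-Newton iteration.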

  7. The pseudo-Boolean optimization approach to form the N-version software structure

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version system design; those algorithms take into account the discovered specific features of the objective function. Practical experiments have shown the advantage of these algorithm modifications in reducing the search space.
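
    The abstract does not specify MVP in detail, so the sketch below uses a generic probability-vector heuristic in the same spirit: binary solutions are sampled from per-bit probabilities that are gradually varied toward the best solution found. The toy "N-version" objective (per-version reliability gain minus cost) and all parameters are invented for illustration and are not the authors' formulation.

```python
import random

def prob_search(value, n_bits, iters=300, pop=30, lr=0.2, seed=7):
    """Probability-vector heuristic for pseudo-Boolean maximization, a
    simplified stand-in for the method of varied probabilities (MVP)."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # sampling probability per bit
    best_x, best_v = None, float("-inf")
    for _ in range(iters):
        for _ in range(pop):
            x = [1 if rng.random() < pi else 0 for pi in p]
            v = value(x)
            if v > best_v:
                best_x, best_v = x, v
        # Vary probabilities toward the best solution, clipped away from
        # 0 and 1 so every bit pattern stays reachable.
        p = [min(0.95, max(0.05, pi + lr * (xi - pi))) for pi, xi in zip(p, best_x)]
    return best_x, best_v

# Toy N-version objective: reliability gain of including version i minus its cost.
gain = [0.30, 0.25, 0.10, 0.05, 0.28]
cost = [0.10, 0.20, 0.15, 0.12, 0.09]
value = lambda x: sum((g - c) * xi for g, c, xi in zip(gain, cost, x))
x, v = prob_search(value, n_bits=5)
```

    For this separable objective the heuristic selects exactly the versions whose gain exceeds their cost; real N-version objectives couple the bits through system-level reliability, which is what makes the problem hard.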

  8. A Structural Health Monitoring Software Tool for Optimization, Diagnostics and Prognostics

    DTIC Science & Technology

    2011-01-01

    A Structural Health Monitoring Software Tool for Optimization, Diagnostics and Prognostics. Seth S. Kessler, Eric B. Flynn, Christopher T. ...technology more accessible, and commercially practical. 1. INTRODUCTION Currently successful laboratory non-destructive testing and monitoring...

  9. Lean and Efficient Software: Whole-Program Optimization of Executables

    DTIC Science & Technology

    2015-09-30

    libraries. Many levels of library interfaces—where some libraries are dynamically linked and some are provided in binary form only—significantly limit...software at build time. The opportunity: Our objective in this project is to substantially improve the performance, size, and robustness of binary ...executables by using static and dynamic binary program analysis techniques to perform whole-program optimization directly on compiled programs

  10. The Federal Aviation Administration Plan for Research, Engineering and Development, 1994

    DTIC Science & Technology

    1994-05-01

    Aeronautical Data Link Communications and Applications, and 051-130 Airport Safety efforts: (COTS) runway incursion system software will be demonstrated as a... airport departure and arrival scheduling plans that optimize daily traffic flows for long-range flights between major city-... * OTFP System to... Expanded HARS planning capabilities to include enhanced communications software for aviation dispatchers to develop optimized high-altitude flight

  11. Algorithm and Software for Calculating Optimal Regimes of the Process Water Supply System at the Kalininskaya NPP{sup 1}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murav’ev, V. P., E-mail: murval@mail.ru; Kochetkov, A. V.; Glazova, E. G.

    An algorithm and software for calculating the optimal operating regimes of the process water supply system at the Kalininskaya NPP are described. The parameters of the optimal regimes are determined for time varying meteorological conditions and condensation loads of the NPP. The optimal flow of the cooling water in the turbines is determined computationally; a regime map with the data on the optimal water consumption distribution between the coolers and displaying the regimes with an admissible heat load on the natural cooling lakes is composed. Optimizing the cooling system for a 4000-MW NPP will make it possible to conserve at least 155,000 MW · h of electricity per year. The procedure developed can be used to optimize the process water supply systems of nuclear and thermal power plants.

  12. Computer Software Reviews.

    ERIC Educational Resources Information Center

    Hawaii State Dept. of Education, Honolulu. Office of Instructional Services.

    Intended to provide guidance in the selection of the best computer software available to support instruction and to make optimal use of schools' financial resources, this publication provides a listing of computer software programs that have been evaluated according to their currency, relevance, and value to Hawaii's educational programs. The…

  13. The Integrated Medical Model

    NASA Technical Reports Server (NTRS)

    Butler, Douglas J.; Kerstman, Eric

    2010-01-01

    This slide presentation reviews the goals and approach for the Integrated Medical Model (IMM). The IMM is a software decision support tool that forecasts medical events during spaceflight and optimizes medical systems during simulations. It includes information on the software capabilities, program stakeholders, use history, and the software logic.

  14. Multidisciplinary Optimization for Aerospace Using Genetic Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.

    2007-01-01

    In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications: for example, NASTRAN(TradeMark) has its solution sequence 200 for design optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, in a loop between the executive and the tool, or both.

  15. Reliability Analysis and Optimal Release Problem Considering Maintenance Time of Software Components for an Embedded OSS Porting Phase

    NASA Astrophysics Data System (ADS)

    Tamura, Yoshinobu; Yamada, Shigeru

    OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still ever-expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON, etc. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We also analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models by using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.

  16. Improving of the working process of axial compressors of gas turbine engines by using an optimization method

    NASA Astrophysics Data System (ADS)

    Marchukov, E.; Egorov, I.; Popov, G.; Baturin, O.; Goriachkin, E.; Novikova, Y.; Kolmakova, D.

    2017-08-01

    The article presents an optimization method for improving the working process of an axial compressor of a gas turbine engine. The developed method automatically searches for the best compressor blade geometry using the optimization software IOSO and the CFD software NUMECA Fine/Turbo. Optimization was performed by changing the shape of the mean line in three sections of each blade and by shifting three sections of the guide vanes in the circumferential and axial directions. The compressor parameters were calculated for the operating point and the stall point of its performance map at each optimization step. The study was carried out for a seven-stage high-pressure compressor and a three-stage low-pressure compressor. As a result of the optimization, an efficiency improvement was achieved for all investigated compressors.

  17. Software for Analyzing Laminar-to-Turbulent Flow Transitions

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

    Software assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures. Examples of such activities are the following: code inspections, unit tests, design reviews, performance analyses, construction of traceability matrices, etc. In practice, software development projects have only limited resources (e.g., schedule, budget, and availability of personnel) to cover the entire development effort, of which assurance is but a part. Projects must therefore select judiciously from among the possible assurance activities. At its heart, this can be viewed as an optimization problem; namely, to determine the allocation of limited resources (time, money, and personnel) to minimize risk or, alternatively, to minimize the resources needed to reduce risk to an acceptable level. The end result of the work reported here is a means to optimize quality-assurance processes used in developing software. This is achieved by combining two prior programs in an innovative manner.

  18. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.

  19. [VALUE OF SMART PHONE Scoliometer SOFTWARE IN OBTAINING OPTIMAL LUMBAR LORDOSIS DURING L4-S1 FUSION SURGERY].

    PubMed

    Yu, Weibo; Liang, De; Ye, Linqiang; Jiang, Xiaobing; Yao, Zhensong; Tang, Jingjing; Tang, Yongchao

    2015-10-01

    To investigate the value of the smartphone Scoliometer software in obtaining optimal lumbar lordosis (LL) during L4-S1 fusion surgery. Between November 2014 and February 2015, 20 patients scheduled for L4-S1 fusion surgery were prospectively enrolled in the study. There were 8 males and 12 females, aged 41-65 years (mean, 52.3 years). The disease duration ranged from 6 months to 6 years (mean, 3.4 years). Before operation, the pelvic incidence (PI) and Cobb angle of L4-S1 (CobbL4-S1) were measured on lateral X-ray films of the lumbosacral spine with the PACS system, and the ideal CobbL4-S1 was then calculated according to previously published methods [(PI+9 degrees) x 70%]. Subsequently, the intraoperative CobbL4-S1 was monitored with the Scoliometer software and was considered optimal when it differed from the ideal CobbL4-S1 by less than 5 degrees. Finally, the CobbL4-S1 was measured with the PACS system after operation, and the consistency between the Scoliometer software and the PACS system was compared to evaluate the accuracy of the software. In addition, the value of this method in obtaining optimal LL was validated by comparing the deviation between the ideal CobbL4-S1 and the preoperative one with the deviation between the ideal CobbL4-S1 and the postoperative one. The CobbL4-S1 was (36.17 ± 1.53) degrees for the ideal one, (22.57 ± 5.50) degrees for the preoperative one, (32.25 ± 1.46) degrees for the intraoperative one measured by the Scoliometer software, and (34.43 ± 1.72) degrees for the postoperative one. The observed intraclass correlation coefficient (ICC) was excellent [ICC = 0.96, 95% confidence interval (0.93, 0.97)] and the mean absolute difference (MAD) was low (MAD = 1.23) between the Scoliometer software and the PACS system. The deviation between the ideal CobbL4-S1 and the postoperative CobbL4-S1 was (2.31 ± 0.23) degrees, which was significantly lower than the deviation between the ideal CobbL4-S1 and the preoperative CobbL4-S1, (13.60 ± 1.85) degrees (t = 6.065, P = 0.001).
    The Scoliometer software can help the surgeon obtain the optimal LL and deserves wider dissemination.
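
    The ideal-lordosis rule quoted above, ideal CobbL4-S1 = (PI + 9°) × 70% with a 5° intraoperative tolerance, is simple enough to encode directly. The PI value in the example below is a hypothetical patient, not data from the study.

```python
def ideal_cobb_l4s1(pelvic_incidence_deg):
    """Ideal L4-S1 Cobb angle per the cited method: (PI + 9 degrees) x 70%."""
    return (pelvic_incidence_deg + 9.0) * 0.70

def is_optimal(measured_deg, ideal_deg, tol_deg=5.0):
    """An intraoperative angle is considered optimal within 5 degrees of ideal."""
    return abs(measured_deg - ideal_deg) < tol_deg

ideal = ideal_cobb_l4s1(42.7)   # hypothetical patient with PI = 42.7 degrees
```

    With this PI, the mean intraoperative angle reported in the study (32.25°) would fall inside the tolerance while the mean preoperative angle (22.57°) would not, matching the intent of the monitoring protocol.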

  20. R&D 100, 2016: Pyomo 4.0 – Python Optimization Modeling Objects

    ScienceCinema

    Hart, William; Laird, Carl; Siirola, John

    2018-06-13

    Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.

  1. Efficacy of a Newly Designed Cephalometric Analysis Software for McNamara Analysis in Comparison with Dolphin Software.

    PubMed

    Nouri, Mahtab; Hamidiaval, Shadi; Akbarzadeh Baghban, Alireza; Basafa, Mohammad; Fahim, Mohammad

    2015-01-01

    Cephalometric norms of McNamara analysis have been studied in various populations due to their optimal efficiency. Dolphin cephalometric software greatly enhances the conduction of this analysis for orthodontic measurements. However, Dolphin is very expensive and cannot be afforded by many clinicians in developing countries. A suitable alternative software program in Farsi/English will greatly help Farsi speaking clinicians. The present study aimed to develop an affordable Iranian cephalometric analysis software program and compare it with Dolphin, the standard software available on the market for cephalometric analysis. In this diagnostic, descriptive study, 150 lateral cephalograms of normal occlusion individuals were selected in Mashhad and Qazvin, two major cities of Iran mainly populated with Fars ethnicity, the main Iranian ethnic group. After tracing the cephalograms, the McNamara analysis standards were measured both with Dolphin and the new software. The cephalometric software was designed using Microsoft Visual C++ program in Windows XP. Measurements made with the new software were compared with those of Dolphin software on both series of cephalograms. The validity and reliability were tested using intra-class correlation coefficient. Calculations showed a very high correlation between the results of the Iranian cephalometric analysis software and Dolphin. This confirms the validity and optimal efficacy of the newly designed software (ICC 0.570-1.0). According to our results, the newly designed software has acceptable validity and reliability and can be used for orthodontic diagnosis, treatment planning and assessment of treatment outcome.

  2. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    NASA Astrophysics Data System (ADS)

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

    Wind turbine wakes can significantly degrade the performance of downstream turbines in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally affordable layout optimization strategy, is therefore an efficient way to address the problem. This work presents a novel soft-computing approach that optimizes the wind farm layout by minimizing the wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine-positioning setup is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian velocity-deficit profile [1], which has been shown to outperform the traditionally employed wake models in LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Results show, compared to the baseline gridded layout, a wind power output increase between 5.5% and 7.7%. In addition, the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
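
    The Gaussian-profile wake model of reference [1] admits a compact closed form for the normalized velocity deficit, sketched below. The wake-growth rate k* and thrust coefficient in the example are illustrative values, not the parameters calibrated for Horns Rev I, and the rotor diameter is a generic 80 m.

```python
import math

def gaussian_wake_deficit(x, r, d0, CT, k_star=0.03):
    """Normalized velocity deficit dU/U_inf at downwind distance x and radial
    distance r behind a turbine of diameter d0 and thrust coefficient CT,
    following the Bastankhah & Porte-Agel (2014) Gaussian wake model."""
    beta = 0.5 * (1.0 + math.sqrt(1.0 - CT)) / math.sqrt(1.0 - CT)
    eps = 0.2 * math.sqrt(beta)                # initial wake width sigma/d0 at x = 0
    sigma_over_d = k_star * x / d0 + eps       # wake width grows linearly with x
    # Centerline deficit amplitude, valid once CT / (8 (sigma/d0)^2) <= 1.
    C = 1.0 - math.sqrt(1.0 - CT / (8.0 * sigma_over_d**2))
    return C * math.exp(-r**2 / (2.0 * (sigma_over_d * d0) ** 2))

# Centerline deficit at 5 and 10 rotor diameters downstream of one turbine.
d5 = gaussian_wake_deficit(x=5 * 80.0, r=0.0, d0=80.0, CT=0.8)
d10 = gaussian_wake_deficit(x=10 * 80.0, r=0.0, d0=80.0, CT=0.8)
```

    In a layout optimizer, deficits from all upstream turbines are combined (e.g. by a sum-of-squares rule) to get each rotor's effective inflow speed, which makes the farm power output a smooth function of the continuous turbine coordinates.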

  3. Optimel: Software for selecting the optimal method

    NASA Astrophysics Data System (ADS)

    Popova, Olga; Popov, Boris; Romanov, Dmitry; Evseeva, Marina

    Optimel, software for selecting the optimal method, automates the process of choosing a solution method from the domain of optimization methods. Optimel has practical novelty: it saves time and money in exploratory studies whose objective is to select the most appropriate method for solving an optimization problem. It also has theoretical novelty, because a new method of knowledge structuring was used to obtain the domain. The Optimel domain covers an extended set of methods and their properties, which makes it possible to identify the level of scientific studies, enhance the user's expertise, expand the prospects the user faces and open up new research objectives. Optimel can be used both in scientific research institutes and in educational institutions.

  4. Unconstrained Capacities of Quantum Key Distribution and Entanglement Distillation for Pure-Loss Bosonic Broadcast Channels.

    PubMed

    Takeoka, Masahiro; Seshadreesan, Kaushik P; Wilde, Mark M

    2017-10-13

    We consider quantum key distribution (QKD) and entanglement distribution using a single-sender multiple-receiver pure-loss bosonic broadcast channel. We determine the unconstrained capacity region for the distillation of bipartite entanglement and secret key between the sender and each receiver, whenever they are allowed arbitrary public classical communication. A practical implication of our result is that the demonstrated capacity region drastically improves upon the rates achievable with a naive time-sharing strategy, which has been employed in previously demonstrated network QKD systems. We show a simple example of a broadcast QKD protocol overcoming the limit of the point-to-point strategy. Our result is thus an important step toward opening a new framework of network channel-based quantum communication technology.

  5. Gaussian Accelerated Molecular Dynamics: Unconstrained Enhanced Sampling and Free Energy Calculation.

    PubMed

    Miao, Yinglong; Feher, Victoria A; McCammon, J Andrew

    2015-08-11

    A Gaussian accelerated molecular dynamics (GaMD) approach for simultaneous enhanced sampling and free energy calculation of biomolecules is presented. By constructing a boost potential that follows a Gaussian distribution, accurate reweighting of the GaMD simulations is achieved using a cumulant expansion to the second order. Here, GaMD is demonstrated on three biomolecular model systems: alanine dipeptide, chignolin folding, and ligand binding to T4-lysozyme. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of these biomolecules. Furthermore, the free energy profiles obtained from reweighting of the GaMD simulations allow us to identify distinct low-energy states of the biomolecules and characterize the protein-folding and ligand-binding pathways quantitatively.
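    The second-order cumulant-expansion reweighting can be sketched as follows. Function name, binning scheme, and temperature are illustrative assumptions; this is not the authors' implementation.

```python
import numpy as np

KB = 0.001987  # Boltzmann constant, kcal/(mol*K)

def cumulant2_reweight(hist_counts, dV_per_bin, T=300.0):
    """Reweight a biased GaMD histogram using the second-order cumulant
    expansion of <exp(beta*dV)>: for each bin,
    ln<exp(beta*dV)> ~= beta*mean(dV) + 0.5*beta**2*var(dV).
    Returns the reweighted free energy profile in kcal/mol, shifted so its
    minimum is zero. Illustrative sketch only."""
    beta = 1.0 / (KB * T)
    counts = np.asarray(hist_counts, float)
    log_w = np.array([beta * np.mean(dv) + 0.5 * beta ** 2 * np.var(dv)
                      for dv in dV_per_bin])
    p = counts * np.exp(log_w - log_w.max())   # stabilize the exponentials
    p /= p.sum()
    F = -KB * T * np.log(np.clip(p, 1e-12, None))
    return F - F.min()
```

    Bins whose frames carried a larger average boost potential receive exponentially larger weights, which is what recovers the unbiased free energy surface from the boosted sampling.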

  6. Gaussian Accelerated Molecular Dynamics: Unconstrained Enhanced Sampling and Free Energy Calculation

    PubMed Central

    2016-01-01

    A Gaussian accelerated molecular dynamics (GaMD) approach for simultaneous enhanced sampling and free energy calculation of biomolecules is presented. By constructing a boost potential that follows a Gaussian distribution, accurate reweighting of the GaMD simulations is achieved using a cumulant expansion to the second order. Here, GaMD is demonstrated on three biomolecular model systems: alanine dipeptide, chignolin folding, and ligand binding to T4-lysozyme. Without the need to set predefined reaction coordinates, GaMD enables unconstrained enhanced sampling of these biomolecules. Furthermore, the free energy profiles obtained from reweighting of the GaMD simulations allow us to identify distinct low-energy states of the biomolecules and characterize the protein-folding and ligand-binding pathways quantitatively. PMID:26300708

  7. Computational Methods for Identification, Optimization and Control of PDE Systems

    DTIC Science & Technology

    2010-04-30

    focused on the development of numerical methods and software specifically for the purpose of solving control, design, and optimization problems where...that provide the foundations of simulation software must play an important role in any research of this type, the demands placed on numerical methods...y sus Aplicaciones, Ciudad de Cordoba - Argentina, October 2007. 3. Inverse Problems in Deployable Space Structures, Fourth Conference on Inverse

  8. The Third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The third Air Force/NASA Symposium on Recent Advances in Multidisciplinary Analysis and Optimization was held on 24-26 Sept. 1990. Sessions were on the following topics: dynamics and controls; multilevel optimization; sensitivity analysis; aerodynamic design software systems; optimization theory; analysis and design; shape optimization; vehicle components; structural optimization; aeroelasticity; artificial intelligence; multidisciplinary optimization; and composites.

  9. Dataflow Design Tool: User's Manual

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1996-01-01

    The Dataflow Design Tool is a software tool for selecting a multiprocessor scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. The software tool implements graph-search algorithms and analysis techniques based on the dataflow paradigm. Dataflow analyses provided by the software are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool provides performance optimization through the inclusion of artificial precedence constraints among the schedulable tasks. The user interface and tool capabilities are described. Examples are provided to demonstrate the analysis, scheduling, and optimization functions facilitated by the tool.

  10. 76 FR 54800 - International Business Machines (IBM), Software Group Business Unit, Quality Assurance Group, San...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-02

    ... Machines (IBM), Software Group Business Unit, Quality Assurance Group, San Jose, California; Notice of... workers of International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA... February 2, 2011 (76 FR 5832). The subject worker group supplies acceptance testing services, design...

  11. WAMA: a method of optimizing reticle/die placement to increase litho cell productivity

    NASA Astrophysics Data System (ADS)

    Dor, Amos; Schwarz, Yoram

    2005-05-01

    This paper focuses on reticle/field placement methodology issues, the disadvantages of typical methods used in the industry, and the innovative way that the WAMA software solution achieves optimized placement. Typical wafer placement methodologies used in the semiconductor industry consider a very limited number of parameters, such as placing the maximum number of die on the wafer circle and manually modifying die placement to minimize edge yield degradation. This paper describes how WAMA software takes into account process characteristics, manufacturing constraints and business objectives to optimize placement for maximum stepper productivity and maximum good die (yield) on the wafer.

  12. Hubble Systems Optimize Hospital Schedules

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Don Rosenthal, a former Ames Research Center computer scientist who helped design the Hubble Space Telescope's scheduling software, co-founded Allocade Inc. of Menlo Park, California, in 2004. Allocade's OnCue software helps hospitals reclaim unused capacity and optimize constantly changing schedules for imaging procedures. After starting to use the software, one medical center soon reported noticeable improvements in efficiency, including a 12 percent increase in procedure volume, 35 percent reduction in staff overtime, and significant reductions in backlog and technician phone time. Allocade now offers versions for outpatient and inpatient magnetic resonance imaging (MRI), ultrasound, interventional radiology, nuclear medicine, Positron Emission Tomography (PET), radiography, radiography-fluoroscopy, and mammography.

  13. Home | BEopt

    Science.gov Websites

    BEopt (Building Energy Optimization) software - NREL (National Renewable Energy Laboratory). The software provides capabilities to evaluate residential building designs and identify ... the sequential search optimization technique used by BEopt finds minimum-cost building designs at different ...

  14. Optimization of the coherence function estimation for multi-core central processing unit

    NASA Astrophysics Data System (ADS)

    Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.

    2017-02-01

    The paper considers the use of parallel processing on a multi-core central processing unit to optimize the evaluation of the coherence function arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed with respect to its software implementation and computational problems. Optimization measures are described, including algorithmic, architectural and compiler optimizations, and their results are assessed for multi-core processors from different manufacturers. Speed-up of the parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization were significantly improved, showing a high degree of parallelism in the constructed calculation functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
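    The quantity being accelerated, the magnitude-squared coherence estimated by averaging windowed FFT segments, can be sketched as below; the independent per-segment loop is exactly what a multi-core implementation would parallelize. Function name and window choice are assumptions, not taken from the paper.

```python
import numpy as np

def coherence(x, y, fs=1.0, nperseg=256):
    """Magnitude-squared coherence C_xy(f) = |S_xy|^2 / (S_xx * S_yy),
    estimated by averaging windowed FFT segments (Welch's method).
    Minimal single-threaded sketch; each segment is independent, so the
    loop below is trivially parallelizable across cores."""
    win = np.hanning(nperseg)
    n_seg = len(x) // nperseg
    sxx = syy = 0.0
    sxy = 0.0
    for k in range(n_seg):                       # independent per segment
        seg = slice(k * nperseg, (k + 1) * nperseg)
        X = np.fft.rfft(win * x[seg])
        Y = np.fft.rfft(win * y[seg])
        sxx = sxx + (X * np.conj(X)).real
        syy = syy + (Y * np.conj(Y)).real
        sxy = sxy + X * np.conj(Y)
    f = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return f, np.abs(sxy) ** 2 / (sxx * syy)
```

    In acoustic correlation leak location, the coherence spectrum weights the cross-correlation toward frequency bands where the two sensor signals are actually related.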

  15. Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Birge, B.

    2013-01-01

    A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. Development proceeds in an iterative loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both to extract the best performance from the hardware as well as the most realistic representation of the hardware in the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned so that the simulation matches real-world subsystem data from test flights. Special considerations for scale, linearity and discontinuities can be all but ignored with this technique, allowing fast turnaround both for tuning the simulation to match hardware changes and, during the test and validation phase, for identifying hardware issues. Software models with insufficient control authority to match hardware test data can be identified immediately, and using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle is discussed, along with a case study highlighting the tool's effectiveness.
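    A minimal PSO of the kind described, applied to fitting model parameters to data, might look like the following. All names, coefficients, and the toy linear model are illustrative assumptions, not the Morpheus tool.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia,
    cognitive and social pulls. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()

# Toy use: recover the gain/offset of a linear model from synthetic "telemetry".
t = np.linspace(0.0, 1.0, 50)
telemetry = 2.5 * t + 0.3
cost = lambda p: np.sum((p[0] * t + p[1] - telemetry) ** 2)
best, err = pso(cost, bounds=[(0.0, 5.0), (-1.0, 1.0)])
```

    Because the cost is evaluated only through simulation output, the search needs no gradients, which is why scale, linearity, and discontinuities matter so little.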

  16. Optimization of Laminated Composite Plates

    DTIC Science & Technology

    1989-09-01

    plane loads has already been studied, and a number of technical publications and software packages can be found. In the present report, an optimization of...described above. There is no difficulty in any case, and commercial software, from personal computers to macro-systems, is available. In the chapter...Reforzado y su Aplicacion a los Medios de Transporte", Ph.D. University of Zaragoza, Spain, 1984. 77. Miravete A., "Caracterisation et mise au Point d’un

  17. Methods for Large-Scale Nonlinear Optimization.

    DTIC Science & Technology

    1980-05-01

    STANFORD, CALIFORNIA 94305 METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright...typical iteration can be partitioned so that where B is an m X m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  18. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
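    The straight-line relationship for unconstrained continuous random variables can be written out explicitly; the following is a standard maximum-entropy derivation consistent with the abstract, not text from the report itself. With the pth absolute moment fixed, the maximizing density is the generalized Gaussian

```latex
f^{*}(x) \;=\; \frac{\exp\!\bigl(-|x|^{p}/(p\,m_{p})\bigr)}
                   {2\,(p\,m_{p})^{1/p}\,\Gamma(1+1/p)},
\qquad m_{p} = \mathbb{E}\,|X|^{p} = \lVert X\rVert_{p}^{p},
```

    and its differential entropy is

```latex
h_{\max} \;=\; \ln \lVert X\rVert_{p}
         \;+\; \frac{1}{p} \;+\; \frac{\ln p}{p}
         \;+\; \ln\!\bigl(2\,\Gamma(1+1/p)\bigr),
```

    a straight line of slope one in \(\ln \lVert X\rVert_{p}\). For \(p=2\) this reduces to the familiar Gaussian value \(\tfrac{1}{2}\ln(2\pi e\sigma^{2})\).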

  19. Infrared and visible fusion face recognition based on NSCT domain

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible-light face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Near-infrared face images, being light-independent, can avoid or limit these drawbacks, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Near-infrared and visible fusion face recognition has therefore become an important direction in unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible face fusion recognition. First, NSCT is applied to the infrared and visible face images respectively, exploiting the image information at multiple scales, orientations, and frequency bands. Then, to extract effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied in the different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near-infrared face recognition is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.

  20. Structural Performance’s Optimally Analysing and Implementing Based on ANSYS Technology

    NASA Astrophysics Data System (ADS)

    Han, Na; Wang, Xuquan; Yue, Haifang; Sun, Jiandong; Wu, Yongchun

    2017-06-01

    Computer-aided engineering (CAE) is a hotspot both in the academic field and in modern engineering practice. The ANSYS simulation software, an outstanding member of the CAE family owing to its excellent performance, is committed to innovation in engineering simulation, helping users shorten the design process and improve product innovation and performance. Aiming to explore a structural-performance optimal analysis model for engineering enterprises, this paper introduces CAE and its development, analyzes the necessity of structural optimal analysis as well as the framework of structural optimal analysis based on ANSYS technology, and uses ANSYS to implement an optimal analysis of the structural performance of a reinforced concrete slab, displaying the displacement vector chart and the stress intensity chart. Finally, the paper compares the ANSYS simulation results with the measured results, showing that ANSYS is an indispensable engineering calculation tool.

  1. Reconfigurable Software for Controlling Formation Flying

    NASA Technical Reports Server (NTRS)

    Mueller, Joseph B.

    2006-01-01

    Software for a system to control the trajectories of multiple spacecraft flying in formation is being developed to reflect underlying concepts of (1) a decentralized approach to guidance and control and (2) reconfigurability of the control system, including reconfigurability of the software and of control laws. The software is organized as a modular network of software tasks. The computational load for both determining relative trajectories and planning maneuvers is shared equally among all spacecraft in a cluster. The flexibility and robustness of the software are apparent in the fact that tasks can be added, removed, or replaced during flight. In a computational simulation of a representative formation-flying scenario, it was demonstrated that the services performed by the software include: uploading of commands from a ground station and distribution of the commands among the spacecraft; autonomous initiation and reconfiguration of formations; autonomous formation of teams through negotiations among the spacecraft; working out details of high-level commands (e.g., shapes and sizes of geometrically complex formations); implementation of a distributed guidance law providing autonomous optimization and assignment of target states; and implementation of a decentralized, fuel-optimal, impulsive control law for planning maneuvers.

  2. Optimal Software Strategies in the Presence of Network Externalities

    ERIC Educational Resources Information Center

    Liu, Yipeng

    2009-01-01

    Network externalities or alternatively termed network effects are pervasive in computer software markets. While software vendors consider pricing strategies, they must also take into account the impact of network externalities on their sales. My main interest in this research is to describe a firm's strategies and behaviors in the presence of…

  3. Sulfamethoxazole in poultry wastewater: Identification, treatability and degradation pathway determination in a membrane-photocatalytic slurry reactor.

    PubMed

    Asha, Raju C; Kumar, Mathava

    2015-01-01

    The presence of sulfamethoxazole (SMX) in a real-time poultry wastewater was identified via HPLC analysis. Subsequently, SMX removal from the poultry wastewater was investigated using a continuous-mode membrane-photocatalytic slurry reactor (MPSR). The real-time poultry wastewater was found to have an SMX concentration of 0-2.3 mg L(-1). A granular activated carbon supported TiO2 (GAC-TiO2) was synthesized, characterized and used in the MPSR experiments. The optimal MPSR condition for complete SMX removal, i.e., HRT ∼ 125 min and catalyst dosage 529.3 mg L(-1), was determined using an unconstrained optimization technique. Under the optimized condition, the effect of SMX concentration on MPSR performance was investigated by synthetic addition of SMX (i.e., 1, 25, 50, 75 and 100 mg L(-1)) into the wastewater. Interestingly, complete removals of total volatile solids (TVS), biochemical oxygen demand (BOD) and SMX were observed at all SMX concentrations investigated. However, a decline in SMX removal rate and a proportionate increase in transmembrane pressure (TMP) were observed when the SMX concentration was increased to higher levels. In the MPSR, SMX mineralization proceeded through one of the following degradation pathways: (i) fragmentation of the isoxazole ring and (ii) elimination of the methyl and amide moieties followed by the formation of phenyl sulfinate ion. These results show that the continuous-mode MPSR has great potential for the removal of SMX and similar organic micropollutants from real-time poultry wastewater.

  4. Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Sen, S. K.

    2007-01-01

    Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real-world applications. Deterministic methods such as gradient algorithms as well as randomized methods such as genetic algorithms may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm/an evolutionary approach is preferable, at least from the quality (accuracy) of the results point of view. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time; there are no serious differences in this regard, i.e., from the computational-complexity point of view. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We have presented here the pros and cons of both approaches so that the concerned reader/user can decide which approach is best suited for the problem at hand. Also, for a function that is known in tabular form instead of an analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one decide which method, out of several available methods, should be employed to obtain the best (least-error) output.
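    The trade-off discussed can be made concrete with a toy comparison. Both routines below are bare-bones sketches under our own assumptions (names, hyperparameters, and the smooth test function are illustrative); real genetic algorithms add crossover and parameter adaptation.

```python
import numpy as np

def num_grad(f, x, h=1e-6):
    """Central-difference numerical gradient (the kind whose error the
    abstract warns about for noisy or tabulated functions)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def gradient_descent(f, x0, lr=0.1, iters=200):
    """Plain gradient descent on a numerical gradient."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        x -= lr * num_grad(f, x)
    return x

def genetic_min(f, dim, pop=40, gens=150, sigma=0.3, seed=0):
    """Bare-bones evolutionary search: keep the best half of the
    population (implicit elitism) and add Gaussian-mutated children."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5.0, 5.0, (pop, dim))
    for _ in range(gens):
        P = P[np.argsort([f(p) for p in P])][: pop // 2]  # select best half
        children = P + rng.normal(0.0, sigma, P.shape)    # mutate
        P = np.vstack([P, children])
    return P[np.argmin([f(p) for p in P])]

f = lambda x: np.sum((x - 1.0) ** 2)  # smooth test function, minimum at x = 1
xg = gradient_descent(f, np.zeros(3))
xe = genetic_min(f, 3)
```

    On a smooth function both reach the minimum, but only the gradient route depends on derivative accuracy; the evolutionary route uses function values alone, which is the abstract's argument for it on noisy or tabulated data.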

  5. LLIMAS: Revolutionizing integrating modeling and analysis at MIT Lincoln Laboratory

    NASA Astrophysics Data System (ADS)

    Doyle, Keith B.; Stoeckel, Gerhard P.; Rey, Justin J.; Bury, Mark E.

    2017-08-01

    MIT Lincoln Laboratory's Integrated Modeling and Analysis Software (LLIMAS) enables the development of novel engineering solutions for advanced prototype systems through unique insights into engineering performance and interdisciplinary behavior to meet challenging size, weight, power, environmental, and performance requirements. LLIMAS is a multidisciplinary design optimization tool that wraps numerical optimization algorithms around an integrated framework of structural, thermal, optical, stray light, and computational fluid dynamics analysis capabilities. LLIMAS software is highly extensible and has developed organically across a variety of technologies including laser communications, directed energy, photometric detectors, chemical sensing, laser radar, and imaging systems. The custom software architecture leverages the capabilities of existing industry standard commercial software and supports the incorporation of internally developed tools. Recent advances in LLIMAS's Structural-Thermal-Optical Performance (STOP), aeromechanical, and aero-optical capabilities as applied to Lincoln prototypes are presented.

  6. Constraints on geomagnetic secular variation modeling from electromagnetism and fluid dynamics of the Earth's core

    NASA Technical Reports Server (NTRS)

    Benton, E. R.

    1986-01-01

    A spherical harmonic representation of the geomagnetic field and its secular variation for epoch 1980, designated GSFC(9/84), is derived and evaluated. At three epochs (1977.5, 1980.0, 1982.5) this model incorporates conservation of magnetic flux through five selected patches of area on the core/mantle boundary bounded by the zero contours of vertical magnetic field. These fifteen nonlinear constraints are included like data in an iterative least squares parameter estimation procedure that starts with the recently derived unconstrained field model GSFC (12/83). Convergence is approached within three iterations. The constrained model is evaluated by comparing its predictive capability outside the time span of its data, in terms of residuals at magnetic observatories, with that for the unconstrained model.

  7. Automated acquisition system for routine, noninvasive monitoring of physiological data.

    PubMed

    Ogawa, M; Tamura, T; Togawa, T

    1998-01-01

    A fully automated, noninvasive data-acquisition system was developed to permit long-term measurement of physiological functions at home, without disturbing subjects' normal routines. The system consists of unconstrained monitors built into furnishings and structures in a home environment. An electrocardiographic (ECG) monitor in the bathtub measures heart function during bathing, a temperature monitor in the bed measures body temperature, and a weight monitor built into the toilet serves as a scale to record weight. All three monitors are connected to one computer and function with data-acquisition programs and a data format rule. The unconstrained physiological parameter monitors and fully automated measurement procedures collect data noninvasively without the subject's awareness. The system was tested for 1 week by a healthy male subject, aged 28, in laboratory-based facilities.

  8. 3D modeling of unconstrained HPT process: role of strain gradient on high deformed microstructure formation

    NASA Astrophysics Data System (ADS)

    Ben Kaabar, A.; Aoufi, A.; Descartes, S.; Desrayaud, C.

    2017-05-01

    During a tribological contact's life, different deformation paths lead to the formation of highly deformed microstructure in the near-surface layers of the bodies. The mechanical conditions occurring under contact (high pressure, shear) are reproduced through an unconstrained high-pressure torsion (HPT) configuration. A 3D finite element model of this HPT test is developed to study the local deformation history leading to highly deformed microstructure under nominal pressure and friction coefficient. For the present numerical study the friction coefficient at the sample/anvil interface is kept constant at 0.3; the material used is high-purity iron. The strain distribution in the sample bulk, as well as the main components of the strain gradients along the spatial coordinates, are investigated as a function of the anvil rotation angle.

  9. Support Vector Machine algorithm for regression and classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Chenggang; Zavaljevski, Nela

    2001-08-01

    The software is an implementation of the Support Vector Machine (SVM) algorithm that was invented and developed by Vladimir Vapnik and his co-workers at AT&T Bell Laboratories. The specific implementation reported here is an Active Set method for solving the quadratic optimization problem that forms the major part of any SVM program. The implementation is tuned to the specific constraints generated in SVM learning and is thus more efficient than general-purpose quadratic optimization programs. A decomposition method has been implemented in the software that enables processing of large data sets; the size of the learning data is virtually unlimited by the capacity of the computer's physical memory. The software is flexible and extensible. Two upper bounds are implemented to regulate SVM learning for classification, which allow users to adjust the false positive and false negative rates. The software can be used either as a standalone, general-purpose SVM regression or classification program, or be embedded into a larger software system.
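    The effect of the two class-specific upper bounds (trading false positives against false negatives) can be illustrated with a much simpler stand-in: a linear SVM trained by hinge-loss subgradient descent with separate penalties per class. This is a pedagogical sketch, not the Active Set QP solver the record describes; all names and constants are ours.

```python
import numpy as np

def class_weighted_svm(X, y, c_pos=1.0, c_neg=1.0, lr=0.01, epochs=500):
    """Linear SVM via subgradient descent on
    0.5*||w||^2 + (1/n) * sum_i c_i * hinge(y_i * (w.x_i + b)),
    with a separate penalty c for each class (y in {+1, -1}).
    Raising c_pos relative to c_neg pushes the boundary to misclassify
    fewer positives, i.e. trades false negatives for false positives."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    c = np.where(y > 0, c_pos, c_neg)        # per-sample penalty
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1.0                 # hinge-active samples
        grad_w = w - (c[viol] * y[viol]) @ X[viol] / n
        grad_b = -np.sum(c[viol] * y[viol]) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

    In the dual QP that an Active Set solver works on, these per-class penalties appear as the two upper bounds on the Lagrange multipliers.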

  10. Volumetric depth peeling for medical image display

    NASA Astrophysics Data System (ADS)

    Borland, David; Clarke, John P.; Fielding, Julia R.; Taylor, Russell M., II

    2006-01-01

    Volumetric depth peeling (VDP) is an extension to volume rendering that enables display of otherwise occluded features in volume data sets. VDP decouples occlusion calculation from the volume rendering transfer function, enabling independent optimization of settings for rendering and occlusion. The algorithm is flexible enough to handle multiple regions occluding the object of interest, as well as object self-occlusion, and requires no pre-segmentation of the data set. VDP was developed as an improvement for virtual arthroscopy for the diagnosis of shoulder-joint trauma, and has been generalized for use in other simple and complex joints, and to enable non-invasive urology studies. In virtual arthroscopy, the surfaces in the joints often occlude each other, allowing limited viewpoints from which to evaluate these surfaces. In urology studies, the physician would like to position the virtual camera outside the kidney collecting system and see inside it. By rendering invisible all voxels between the observer's point of view and objects of interest, VDP enables viewing from unconstrained positions. In essence, VDP can be viewed as a technique for automatically defining an optimal data- and task-dependent clipping surface. Radiologists using VDP display have been able to perform evaluations of pathologies more easily and more rapidly than with clinical arthroscopy, standard volume rendering, or standard MRI/CT slice viewing.

  11. Wavefront Control Toolbox for James Webb Space Telescope Testbed

    NASA Technical Reports Server (NTRS)

    Shiri, Ron; Aronstein, David L.; Smith, Jeffery Scott; Dean, Bruce H.; Sabatke, Erin

    2007-01-01

    We have developed a Matlab toolbox for wavefront control of optical systems. We have applied this toolbox to optical models of the James Webb Space Telescope (JWST) in general and to the JWST Testbed Telescope (TBT) in particular, implementing both unconstrained and constrained wavefront optimization to correct for possible misalignments of the segmented primary mirror or the monolithic secondary mirror. The optical models are implemented in the Zemax optical design program, and information is exchanged between Matlab and Zemax via the Dynamic Data Exchange (DDE) interface. The model configuration is managed using the XML protocol. The optimization algorithm uses influence functions for each adjustable degree of freedom of the optical model. Iterative and non-iterative algorithms have been developed that converge to a local minimum of the root-mean-square (rms) wavefront error using a singular value decomposition of the control matrix of influence functions. The toolkit is highly modular and allows the user to choose control strategies for the degrees of freedom to be adjusted on a given iteration and the wavefront convergence criterion. As the influence functions are nonlinear over the control parameter space, the toolkit also allows trade-offs between the frequency of updating the local influence functions and execution speed. The functionality of the toolbox and the validity of the underlying algorithms have been verified through extensive simulations.
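    SVD-based correction from a control matrix of influence functions can be sketched generically. Matrix sizes, the truncation threshold, and all names below are assumptions for illustration, not the toolbox's code.

```python
import numpy as np

def svd_correction(A, w, rcond=1e-3):
    """Least-squares actuator command u minimizing ||w + A u||_2, via a
    truncated-SVD pseudoinverse of the influence matrix A (each column is
    the wavefront response to a unit move of one degree of freedom).
    Truncating weak singular values avoids amplifying noise into large,
    poorly-sensed actuator commands."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)  # drop weak modes
    return -(Vt.T * s_inv) @ (U.T @ w)

# Toy check: with a well-conditioned influence matrix, a wavefront caused
# by a misalignment is nulled in a single linear correction step.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 8))        # 50 wavefront samples, 8 actuators
u_true = rng.normal(size=8)
w = A @ u_true                      # wavefront produced by the misalignment
u = svd_correction(A, w)
residual_rms = np.sqrt(np.mean((w + A @ u) ** 2))
```

    Because real influence functions are only locally linear, an iterative scheme re-measures the wavefront (and optionally re-derives A) and repeats this step until the rms error converges.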

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delcamp, E.; Lagarde, B.; Polack, F.

    Though optimization software is commonly used in visible optical design, none seems to exist for soft X-ray optics. It is shown here that optimization techniques can be applied with some advantage to X-UV monochromator design. A merit function suitable for minimizing the aberrations is proposed, and the general method of computation is described. Samples of the software inputs and outputs are presented and compared to reference data. As an example of application to soft X-ray monochromator design, the optimization of the soft X-ray monochromator of the ESRF microscopy beamline is presented. Good agreement between the predicted resolution of a modified PGM monochromator and experimental measurements is reported.

  13. New Software Developments for Quality Mesh Generation and Optimization from Biomedical Imaging Data

    PubMed Central

    Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko

    2013-01-01

    In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. PMID:24252469

  14. A hybrid modeling system designed to support decision making in the optimization of extrusion of inhomogeneous materials

    NASA Astrophysics Data System (ADS)

    Kryuchkov, D. I.; Zalazinsky, A. G.

    2017-12-01

    Mathematical models and a hybrid modeling system are developed for the implementation of the experimental-calculation method for the engineering analysis and optimization of the plastic deformation of inhomogeneous materials with the purpose of improving metal-forming processes and machines. The created software solution integrates Abaqus/CAE, a subroutine for mathematical data processing, with the use of Python libraries and the knowledge base. Practical application of the software solution is exemplified by modeling the process of extrusion of a bimetallic billet. The results of the engineering analysis and optimization of the extrusion process are shown, the material damage being monitored.

  15. Automatic generation of randomized trial sequences for priming experiments.

    PubMed

    Ihrke, Matthias; Behrendt, Jörg

    2011-01-01

    In most psychological experiments, a randomized presentation of successive displays is crucial for the validity of the results. For some paradigms this is not a trivial issue because trials are interdependent, e.g., in priming paradigms. We present a software tool that automatically generates optimized trial sequences for (negative) priming experiments. Our implementation is based on an optimization heuristic known as genetic algorithms, which allows for an intuitive interpretation due to its similarity to natural evolution. The program features a graphical user interface that allows the user to generate trial sequences and to interactively improve them. The tool is built on freely available software and is released under the GNU General Public License.
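A toy genetic algorithm in the spirit of the sequence-generation software described above: evolve an ordering of trials so that no two consecutive trials share a stimulus, a stand-in for the priming-dependency constraints. The fitness function and parameters are illustrative assumptions, and crossover is omitted for brevity.

```python
import random

random.seed(1)
TRIALS = ["A", "A", "B", "B", "C", "C", "D", "D"]

def fitness(seq):
    """Count adjacent repetitions (lower is better)."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a == b)

def mutate(seq):
    """Swap two random positions."""
    s = seq[:]
    i, j = random.randrange(len(s)), random.randrange(len(s))
    s[i], s[j] = s[j], s[i]
    return s

def evolve(pop_size=30, generations=200):
    pop = [random.sample(TRIALS, len(TRIALS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:
            break
        # Keep the best half, refill with mutated copies of survivors.
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```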

  16. Incorporating cost-benefit analyses into software assurance planning

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Sigal, B.; Cornford, S. L.; Hutchinson, P.

    2001-01-01

    The objective is to use cost-benefit analyses to identify, for a given project, optimal sets of software assurance activities. Towards this end we have incorporated cost-benefit calculations into a risk management framework.

  17. SOA Governance: A Critical SOA Success Factor

    DTIC Science & Technology

    2010-04-01

    Building blocks of a SOA (software perspective: service consumers, service providers, interfaces). Service – a software-implemented capability that is well-defined, self-contained, and does not depend on the context or state of other services. Service Consumer – a service, application, or other software component that requires a specific service; it is located through a registry and initiates the service.

  18. CrossTalk. The Journal of Defense Software Engineering. Volume 13, Number 6, June 2000

    DTIC Science & Technology

    2000-06-01

    Techniques for Efficiently Generating and Testing Software: This paper presents a proven process that uses advanced tools to design, develop, and test optimal software. by Keith R. Wegner. Large Software Systems—Back to Basics: Development methods that work on small problems seem to not scale well to... Ability Requirements for Teamwork: Implications for Human Resource Management, Journal of Management, Vol. 20, No. 2, 1994. 11. Ferguson, Pat, Watts S

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Kuo-Ling; Mehrotra, Sanjay

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  20. A benchmarking tool to evaluate computer tomography perfusion infarct core predictions against a DWI standard.

    PubMed

    Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G

    2016-10-01

    Differences in research methodology have hampered the optimization of Computer Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP-software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP-ROIs was optimal at an rCBF threshold of <38%; at this threshold, the mean difference was 0.3 ml (SD 19.8 ml), the mean absolute difference was 14.3 (SD 13.7) ml, and CTP was 67% sensitive and 87% specific for identification of DWI positive tissue voxels. The benchmarking tool can play an important role in optimizing CTP software as it provides investigators with a novel method to directly compare the performance of alternative CTP software packages. © The Author(s) 2015.
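The voxel-wise evaluation described above can be sketched as follows: threshold CTP rCBF values to produce a predicted core mask, then compare it against a DWI mask voxel by voxel to obtain sensitivity and specificity. The masks and rCBF values below are tiny made-up examples, not patient data; only the <38% threshold comes from the abstract.

```python
def confusion(ctp_mask, dwi_mask):
    tp = sum(c and d for c, d in zip(ctp_mask, dwi_mask))
    tn = sum((not c) and (not d) for c, d in zip(ctp_mask, dwi_mask))
    fp = sum(c and not d for c, d in zip(ctp_mask, dwi_mask))
    fn = sum((not c) and d for c, d in zip(ctp_mask, dwi_mask))
    return tp, tn, fp, fn

def sensitivity_specificity(ctp_mask, dwi_mask):
    tp, tn, fp, fn = confusion(ctp_mask, dwi_mask)
    return tp / (tp + fn), tn / (tn + fp)

# rCBF per voxel (% of normal); voxels under the 38% threshold are called "core".
rcbf = [20, 35, 50, 90, 30, 70]
ctp_core = [v < 38 for v in rcbf]
dwi_core = [True, True, False, False, False, False]

sens, spec = sensitivity_specificity(ctp_core, dwi_core)
print(sens, spec)
```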

  1. Economic optimization software applied to JFK airport heating and cooling plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gay, R.R.; McCoy, L.

    This paper describes the on-line economic optimization routine developed by Enter Software, Inc. for application at the heating and cooling plant for the JFK International Airport near New York City. The objective of the economic optimization is to find the optimum plant configuration (which gas turbines to run, power levels of each gas turbine, duct firing levels, which auxiliary water heaters to run, which electric chillers to run, and which absorption chillers to run) which produces maximum net income at the plant as plant loads and the prices vary. The routines also include a planner which runs a series of optimizations over multiple plant configurations to simulate the varying plant operating conditions for the purpose of predicting the overall plant results over a period of time.
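A minimal brute-force sketch of the configuration search described above: enumerate on/off combinations of units and pick the one with maximum net income for the current load and price. All unit data, loads, and prices are invented for illustration; the real routine also optimizes continuous power levels, duct firing, and chiller dispatch, which this sketch omits.

```python
from itertools import product

# (name, capacity_mw, operating_cost_per_mwh) -- hypothetical units.
UNITS = [("GT1", 40.0, 30.0), ("GT2", 40.0, 35.0), ("GT3", 25.0, 45.0)]

def best_configuration(load_mw, price_per_mwh):
    best = (float("-inf"), None)
    for on in product([False, True], repeat=len(UNITS)):
        cap = sum(u[1] for u, o in zip(UNITS, on) if o)
        if cap < load_mw:
            continue  # infeasible: committed units cannot meet the load
        # Dispatch the cheapest committed units first, up to the load.
        cost, remaining = 0.0, load_mw
        for name, c, rate in sorted(
                (u for u, o in zip(UNITS, on) if o), key=lambda u: u[2]):
            gen = min(c, remaining)
            cost += gen * rate
            remaining -= gen
        income = load_mw * price_per_mwh - cost
        if income > best[0]:
            best = (income, [u[0] for u, o in zip(UNITS, on) if o])
    return best

income, config = best_configuration(load_mw=60.0, price_per_mwh=50.0)
print(config, income)
```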

  2. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
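The polling-switch-selection part of the problem described above can be viewed as a covering problem: given fixed flow routes, choose the fewest switches to poll such that every flow traverses at least one polled switch. The brute-force sketch below illustrates that view on an invented topology; the paper's ILP additionally optimizes the routes themselves and weighs communication costs, which is omitted here.

```python
from itertools import combinations

# Each flow is listed with the switches on its route (hypothetical).
flows = {
    "f1": {"s1", "s2", "s4"},
    "f2": {"s2", "s3"},
    "f3": {"s3", "s4"},
    "f4": {"s1", "s5"},
}
switches = sorted(set().union(*flows.values()))

def min_polling_set():
    """Smallest set of switches such that every flow crosses a polled switch."""
    for k in range(1, len(switches) + 1):
        for subset in combinations(switches, k):
            chosen = set(subset)
            if all(route & chosen for route in flows.values()):
                return chosen  # first (hence smallest) covering set found
    return set(switches)

print(min_polling_set())
```

Enumeration is exponential in the number of switches, which is exactly why the paper resorts to an ILP formulation and heuristics for large networks.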

  3. A new modified conjugate gradient coefficient for solving system of linear equations

    NASA Astrophysics Data System (ADS)

    Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is a classical computational method for solving unconstrained optimization problems. The approach is easy to implement due to its simplicity and has been proven effective in real-life applications. Although this field has received a copious amount of attention in recent years, some of the new variants of the CG algorithm cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. The new CG method is tested on a set of test functions under exact line search. Its performance is then compared to that of some well-known previous CG methods based on the number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency among all the methods tested. This paper also includes an application of the new CG algorithm to solving large systems of linear equations.
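The linear-system application mentioned above rests on a standard equivalence: for a symmetric positive-definite matrix A, solving A x = b is the same as minimizing the quadratic f(x) = 0.5 xᵀA x - bᵀx, for which CG uses an exact line search at every step. The sketch below is the classical linear CG method (with a Fletcher-Reeves-type coefficient), not the paper's new coefficient.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]               # residual b - A x (x = 0 initially)
    d = r[:]               # first search direction is steepest descent
    rr = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rr ** 0.5 < tol:
            break
        Ad = [sum(A[i][j] * d[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(di * adi for di, adi in zip(d, Ad))  # exact line search
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        rr_new = sum(ri * ri for ri in r)
        beta = rr_new / rr    # CG coefficient (Fletcher-Reeves form)
        d = [ri + beta * di for ri, di in zip(r, d)]
        rr = rr_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)  # exact in at most 2 steps for a 2x2 system
```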

  4. Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.

    PubMed

    Chen, C W; Chen, D Z

    2001-11-01

    Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
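The penalty idea discussed above (in the style of Joerding's method) can be illustrated on a much smaller model than a neural network: fit by gradient descent while penalizing violations of an increasing-monotonicity constraint on a grid of inputs. The model, data, and penalty weight are invented for illustration; the data deliberately dips at the end so that the unconstrained fit would be non-monotonic.

```python
# Model: y = a*x + b*x**2 fit to data on [0, 1] that dips at the end.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [0.0, 0.6, 0.9, 1.0, 0.95]
LAM = 10.0   # penalty weight on negative slope

def loss_grad(a, b):
    """Gradient of sum-of-squares error plus a monotonicity penalty."""
    ga = gb = 0.0
    for x, y in zip(xs, ys):
        err = a * x + b * x * x - y
        ga += 2 * err * x
        gb += 2 * err * x * x
    for x in xs:             # penalize negative slope a + 2*b*x on the grid
        slope = a + 2 * b * x
        if slope < 0:
            ga += 2 * LAM * slope
            gb += 2 * LAM * slope * 2 * x
    return ga, gb

a = b = 0.0
for _ in range(30000):       # plain gradient descent
    ga, gb = loss_grad(a, b)
    a -= 0.005 * ga
    b -= 0.005 * gb

print(a, b, min(a + 2 * b * x for x in xs))  # slope is near-monotonic on the grid
```

Raising LAM drives the worst-case slope closer to zero at the cost of a slightly worse data fit, which is the basic trade-off of penalty methods.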

  5. Adiabatic Quantum Computation with Neutral Atoms

    NASA Astrophysics Data System (ADS)

    Biedermann, Grant

    2013-03-01

    We are implementing a new platform for adiabatic quantum computation (AQC)[2] based on trapped neutral atoms whose coupling is mediated by the dipole-dipole interactions of Rydberg states. Ground state cesium atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism,[3,4] thereby providing the requisite entangling interactions. As a benchmark we study a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. In collaboration with Lambert Parazzoli, Sandia National Laboratories; Aaron Hankin, Center for Quantum Information and Control (CQuIC), University of New Mexico; James Chin-Wen Chou, Yuan-Yu Jau, Peter Schwindt, Cort Johnson, and George Burns, Sandia National Laboratories; Tyler Keating, Krittika Goyal, and Ivan Deutsch, Center for Quantum Information and Control (CQuIC), University of New Mexico; and Andrew Landahl, Sandia National Laboratories. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories.
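The QUBO benchmark mentioned above asks for the binary vector minimizing xᵀQx, which maps onto the ground state of an Ising-type model; the hardware finds that ground state adiabatically. For tiny instances the answer can simply be checked classically by enumeration, as sketched below with an arbitrary small Q matrix.

```python
from itertools import product

# Upper-triangular QUBO matrix (arbitrary illustrative values).
Q = [[-1.0, 2.0, 0.0],
     [0.0, -1.0, 2.0],
     [0.0, 0.0, -1.0]]

def qubo_energy(x):
    """Energy x^T Q x for a binary configuration x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Enumerate all 2^n configurations and keep the lowest-energy one.
best = min(product([0, 1], repeat=3), key=qubo_energy)
print(best, qubo_energy(best))
```

Enumeration costs 2^n, which is precisely why physical annealers and AQC platforms are interesting for large n.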

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu; Phanish, Deepa

    We present an Augmented Lagrangian formulation and its real-space implementation for non-periodic Orbital-Free Density Functional Theory (OF-DFT) calculations. In particular, we rewrite the constrained minimization problem of OF-DFT as a sequence of minimization problems without any constraint, thereby making it amenable to powerful unconstrained optimization algorithms. Further, we develop a parallel implementation of this approach for the Thomas–Fermi–von Weizsacker (TFW) kinetic energy functional in the framework of higher-order finite-differences and the conjugate gradient method. With this implementation, we establish that the Augmented Lagrangian approach is highly competitive compared to the penalty and Lagrange multiplier methods. Additionally, we show that higher-order finite-differences represent a computationally efficient discretization for performing OF-DFT simulations. Overall, we demonstrate that the proposed formulation and implementation are both efficient and robust by studying selected examples, including systems consisting of thousands of atoms. We validate the accuracy of the computed energies and forces by comparing them with those obtained by existing plane-wave methods.
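A one-constraint sketch of the Augmented Lagrangian idea described above: the constrained problem min f(x) s.t. g(x) = 0 is replaced by a sequence of unconstrained minimizations of f + λ·g + (μ/2)·g², updating the multiplier λ after each solve. The toy problem (min x² + y² subject to x + y = 1) and the inner gradient-descent solver are illustrative stand-ins for OF-DFT energy functionals and conjugate gradients.

```python
def solve(mu=10.0, outer=20, inner=2000, lr=0.01):
    x = y = 0.0
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):          # unconstrained inner minimization
            g = x + y - 1.0
            gx = 2 * x + lam + mu * g   # d/dx of the augmented Lagrangian
            gy = 2 * y + lam + mu * g
            x -= lr * gx
            y -= lr * gy
        lam += mu * (x + y - 1.0)       # multiplier update
    return x, y, lam

x, y, lam = solve()
print(x, y, lam)  # approaches x = y = 0.5 with multiplier lam = -1
```

Unlike a pure penalty method, the constraint is satisfied without driving μ to infinity, which is the practical advantage the abstract reports.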

  7. Quasi-Newton parallel geometry optimization methods

    NASA Astrophysics Data System (ADS)

    Burger, Steven K.; Ayers, Paul W.

    2010-07-01

    Algorithms for parallel unconstrained minimization of molecular systems are examined. The overall framework of minimization is the same except for the choice of directions for updating the quasi-Newton Hessian. Ideally, these directions are chosen so that the updated Hessian gives steps that are the same as those of the Newton method. Three approaches to determining the update directions are presented: the straightforward approach of simply cycling through the Cartesian unit vectors (finite difference), a concurrent set of minimizations, and the Lanczos method. We show the importance of using preconditioning and a multiple secant update in these approaches. For the Lanczos algorithm, an initial set of directions is required to start the method, and a number of possibilities are explored. To test the methods we used the standard 50-dimensional analytic Rosenbrock function. Results are also reported for the histidine dipeptide, the isoleucine tripeptide, and cyclic adenosine monophosphate. All of these systems show a significant speed-up as the number of processors increases, up to about eight processors.
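A serial two-dimensional sketch of the quasi-Newton framework discussed above: BFGS builds an approximate inverse Hessian from gradient differences, here tested on the same Rosenbrock function the paper uses (in 2 rather than 50 dimensions, and without the parallel update directions that are the paper's contribution).

```python
def f(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    return [-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
            200.0 * (x[1] - x[0] ** 2)]

def bfgs(x, iters=200):
    H = [[1.0, 0.0], [0.0, 1.0]]        # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if (g[0] ** 2 + g[1] ** 2) ** 0.5 < 1e-8:
            break
        d = [-(H[0][0] * g[0] + H[0][1] * g[1]),
             -(H[1][0] * g[0] + H[1][1] * g[1])]
        t, fx, slope = 1.0, f(x), g[0] * d[0] + g[1] * d[1]
        while f([x[0] + t * d[0], x[1] + t * d[1]]) > fx + 1e-4 * t * slope:
            t *= 0.5                     # backtracking (Armijo) line search
        x_new = [x[0] + t * d[0], x[1] + t * d[1]]
        g_new = grad(x_new)
        s = [x_new[0] - x[0], x_new[1] - x[1]]
        yv = [g_new[0] - g[0], g_new[1] - g[1]]
        sy = s[0] * yv[0] + s[1] * yv[1]
        if sy > 1e-12:                   # curvature safeguard keeps H positive definite
            rho = 1.0 / sy
            # BFGS update: H <- (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T
            A = [[(1.0 if i == j else 0.0) - rho * s[i] * yv[j]
                  for j in range(2)] for i in range(2)]
            HAt = [[sum(H[i][k] * A[j][k] for k in range(2))
                    for j in range(2)] for i in range(2)]    # H * A^T
            H = [[sum(A[i][k] * HAt[k][j] for k in range(2)) + rho * s[i] * s[j]
                  for j in range(2)] for i in range(2)]
        x, g = x_new, g_new
    return x

print(bfgs([-1.2, 1.0]))  # converges near the minimum at (1, 1)
```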

  8. Nonlinear aeroservoelastic analysis of a controlled multiple-actuated-wing model with free-play

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Hu, Haiyan; Zhao, Yonghui

    2013-10-01

    In this paper, the effects of structural nonlinearity due to free-play in both the leading-edge and trailing-edge outboard control surfaces on a linear flutter control system are analyzed for an aeroelastic model of a three-dimensional multiple-actuated wing. The free-play nonlinearities in the control surfaces are modeled theoretically by using the fictitious mass approach. The nonlinear aeroelastic equations of the presented model can be divided into nine sub-linear modal-based aeroelastic equations according to the different combinations of deflections of the leading-edge and trailing-edge outboard control surfaces. The nonlinear aeroelastic responses can be computed based on these sub-linear aeroelastic systems. To demonstrate the effects of the nonlinearity on the linear flutter control system, a single-input single-output controller and a multi-input multi-output controller are designed based on unconstrained optimization techniques. The numerical results indicate that the free-play nonlinearity can lead to either limit-cycle oscillations or divergent motions when the linear control system is implemented.

  9. Evaluation of structural and thermophysical effects on the measurement accuracy of deep body thermometers based on dual-heat-flux method.

    PubMed

    Huang, Ming; Tamura, Toshiyo; Chen, Wenxi; Kanaya, Shigehiko

    2015-01-01

    To help pave a path toward the practical use of continuous unconstrained noninvasive deep body temperature measurement, this study aims to evaluate the structural and thermophysical effects on measurement accuracy for the dual-heat-flux method (DHFM). By considering the thermometer's height, radius, conductivity, density and specific heat as variables affecting the accuracy of DHFM measurement, we investigated the relationship between those variables and accuracy using 3-D models based on finite element method. The results of our simulation study show that accuracy is proportional to the radius but inversely proportional to the thickness of the thermometer when the radius is less than 30.0mm, and is also inversely proportional to the heat conductivity of the heat insulator inside the thermometer. The insights from this study would help to build a guideline for design, fabrication and optimization of DHFM-based thermometers, as well as their practical use. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Microbend fiber-optic temperature sensor

    DOEpatents

    Weiss, J.D.

    1995-05-30

    A temperature sensor is made of optical fiber into which quasi-sinusoidal microbends have been permanently introduced. In particular, the present invention includes a graded-index optical fiber directing steady light through a section of the optical fiber containing a plurality of permanent microbends. The microbend section of the optical fiber is contained in a thermally expansive sheath, attached to a thermally expansive structure, or attached to a bimetallic element undergoing temperature changes and being monitored. The microbend section is secured to the thermally expansive sheath which allows the amplitude of the microbends to decrease with temperature. The resultant increase in the optical fiber's transmission thus allows temperature to be measured. The plural microbend section of the optical fiber is secured to the thermally expansive structure only at its ends and the microbends themselves are completely unconstrained laterally by any bonding agent to obtain maximum longitudinal temperature sensitivity. Although the permanent microbends reduce the transmission capabilities of fiber optics, the present invention utilizes this phenomenon as a transduction mechanism which is optimized to measure temperature. 5 figs.

  11. Microbend fiber-optic temperature sensor

    DOEpatents

    Weiss, Jonathan D.

    1995-01-01

    A temperature sensor is made of optical fiber into which quasi-sinusoidal microbends have been permanently introduced. In particular, the present invention includes a graded-index optical fiber directing steady light through a section of the optical fiber containing a plurality of permanent microbends. The microbend section of the optical fiber is contained in a thermally expansive sheath, attached to a thermally expansive structure, or attached to a bimetallic element undergoing temperature changes and being monitored. The microbend section is secured to the thermally expansive sheath which allows the amplitude of the microbends to decrease with temperature. The resultant increase in the optical fiber's transmission thus allows temperature to be measured. The plural microbend section of the optical fiber is secured to the thermally expansive structure only at its ends and the microbends themselves are completely unconstrained laterally by any bonding agent to obtain maximum longitudinal temperature sensitivity. Although the permanent microbends reduce the transmission capabilities of fiber optics, the present invention utilizes this phenomenon as a transduction mechanism which is optimized to measure temperature.

  12. Dynamic interrogative data capture (DIDC) : concept of operations.

    DOT National Transportation Integrated Search

    2016-04-01

    This Concept of Operations (ConOps) describes the characteristics of the Dynamic Interrogative Data Capture (DIDC) algorithms and associated software. The objective of the DIDC algorithms and software is to optimize the capture and transmission of ve...

  13. Advanced Traffic Management Systems (ATMS) research analysis database system

    DOT National Transportation Integrated Search

    2001-06-01

    The ATMS Research Analysis Database Systems (ARADS) consists of a Traffic Software Data Dictionary (TSDD) and a Traffic Software Object Model (TSOM) for application to microscopic traffic simulation and signal optimization domains. The purpose of thi...

  14. j5 DNA assembly design automation.

    PubMed

    Hillson, Nathan J

    2014-01-01

    Modern standardized methodologies, described in detail in the previous chapters of this book, have enabled the software-automated design of optimized DNA construction protocols. This chapter describes how to design (combinatorial) scar-less DNA assembly protocols using the web-based software j5. j5 assists biomedical and biotechnological researchers construct DNA by automating the design of optimized protocols for flanking homology sequence as well as type IIS endonuclease-mediated DNA assembly methodologies. Unlike any other software tool available today, j5 designs scar-less combinatorial DNA assembly protocols, performs a cost-benefit analysis to identify which portions of an assembly process would be less expensive to outsource to a DNA synthesis service provider, and designs hierarchical DNA assembly strategies to mitigate anticipated poor assembly junction sequence performance. Software integrated with j5 add significant value to the j5 design process through graphical user-interface enhancement and downstream liquid-handling robotic laboratory automation.

  15. Developing interpretable models with optimized set reduction for identifying high risk software components

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.

  16. Software Design Improvements. Part 2; Software Quality and the Design and Inspection Process

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom

    1997-01-01

    The application of assurance engineering techniques improves the duration of failure-free performance of software. The totality of features and characteristics of a software product are what determine its ability to satisfy customer needs. Software in safety-critical systems is very important to NASA. We follow the System Safety Working Groups definition for system safety software as: 'The optimization of system safety in the design, development, use and maintenance of software and its integration with safety-critical systems in an operational environment. 'If it is not safe, say so' has become our motto. This paper goes over methods that have been used by NASA to make software design improvements by focusing on software quality and the design and inspection process.

  17. Event Oriented Design and Adaptive Multiprocessing

    DTIC Science & Technology

    1991-08-31

    ...System; 2.3 The Classification; 2.4 Real-Time Systems; 2.5 Non Real-Time Systems; 2.6 Common Characterizations of all Software Systems; 2.7... Non-Optimal Guarantee Test Theorem; 6.3.2 Chetto's Optimal Guarantee Test Theorem; 6.3.3 Multistate Case: An Extended Guarantee Test Theorem... which subdivides all software systems according to the way in which they operate, such as interactive, non-interactive, real-time, etc. Having defined

  18. A ranking algorithm for spacelab crew and experiment scheduling

    NASA Technical Reports Server (NTRS)

    Grone, R. D.; Mathis, F. H.

    1980-01-01

    The problem of obtaining an optimal or near-optimal schedule for scientific experiments to be performed on Spacelab missions is addressed. The current capabilities in this regard are examined, and a method of ranking experiments in order of difficulty is developed to support the existing software. Experimental data are obtained by applying this method to the sets of experiments corresponding to Spacelab missions 1, 2, and 3. Finally, suggestions are made concerning desirable modifications and features of second-generation software being developed for this problem.

  19. Recent Improvements to the Copernicus Trajectory Design and Optimization System

    NASA Technical Reports Server (NTRS)

    Williams, Jacob; Senent, Juan S.; Ocampo, Cesar; Lee, David E.

    2012-01-01

    Copernicus is a software tool for spacecraft trajectory design and optimization. The latest version (v3.0.1) was released in October 2011. It is available at no cost to NASA centers, government contractors, and organizations with a contractual affiliation with NASA. This paper is a brief overview of the recent development history of Copernicus. An overview of the evolution of the software and a discussion of significant new features and improvements are given, along with a description of how the tool is used to design spacecraft missions.

  20. Optimal transfers between libration-point orbits in the elliptic restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Hiday, Lisa Ann

    1992-09-01

    A strategy is formulated to design optimal impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L(1) libration point of the Sun-Earth/Moon barycenter system. Two methods of constructing nominal transfers, for which the fuel cost is to be minimized, are developed; both inferior and superior transfers between two halo orbits are considered. The necessary conditions for an optimal transfer trajectory are stated in terms of the primer vector. The adjoint equation relating reference and perturbed trajectories in this formulation of the elliptic restricted three-body problem is shown to be distinctly different from that obtained in the analysis of trajectories in the two-body problem. Criteria are established whereby the cost on a nominal transfer can be improved by the addition of an interior impulse or by the implementation of coast arcs in the initial and final orbits. The necessary conditions for the local optimality of a time-fixed transfer trajectory possessing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. The optimality of a time-free transfer containing coast arcs is surmised by examination of the slopes at the endpoints of a plot of the magnitude of the primer vector over the duration of the transfer path. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The position and timing of each interior impulse applied to a time-fixed transfer, as well as the direction and length of coast periods implemented on a time-free transfer, are specified by the unconstrained minimization of the appropriate variation in cost utilizing a multivariable search technique.
    Although optimal solutions in some instances are elusive, the time-fixed and time-free optimization algorithms prove to be very successful in diminishing costs on nominal transfer trajectories. The inclusion of coast arcs on time-free superior and inferior transfers results in significant modification of the transfer time of flight caused by shifts in departure and arrival locations on the halo orbits.
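The cost refinement described above relies on unconstrained minimization via a multivariable search technique; a minimal derivative-free stand-in is compass (pattern) search, sketched here on a generic smooth 2-D cost function. The cost function below is an arbitrary illustration, not the primer-vector cost variation.

```python
def compass_search(cost, x, step=0.5, tol=1e-8):
    """Derivative-free minimization: try coordinate moves, shrink on failure."""
    fx = cost(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x[:]
                trial[i] += delta
                ft = cost(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5   # refine the pattern around the current point
    return x, fx

# Example: quadratic bowl with minimum at (2, -1).
cost = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
x, fx = compass_search(cost, [0.0, 0.0])
print(x, fx)
```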

  1. Queue and stack sorting algorithm optimization and performance analysis

    NASA Astrophysics Data System (ADS)

    Qian, Mingzhu; Wang, Xiaobao

    2018-04-01

    Sorting algorithms are among the basic operations used in all kinds of software development, and data structures courses cover many sorting algorithms in detail. The performance of a sorting algorithm is directly related to the efficiency of the software that uses it. Much excellent research has gone into optimizing queue-based sorting; here the authors further investigate sorting algorithms that combine a queue with a stack. The algorithm mainly relies on alternating operations on the queue and the stack, exploiting their storage properties and thus avoiding the large number of exchange or move operations required by traditional sorts. Building on the existing work, we propose improvements and optimizations focused on time complexity, and we study the time complexity, space complexity, and stability of the algorithm. The experimental results show that the improvements are effective and that the improved and optimized algorithm is more practical.
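The queue-and-stack alternation described above can be illustrated with a simple O(n²) sort that uses only queue and stack operations: dequeue each element, pop any larger stack elements back onto the queue before pushing it, and the stack stays sorted throughout. This is a generic illustration of the idea, not the authors' optimized algorithm.

```python
from collections import deque

def queue_stack_sort(items):
    queue = deque(items)
    stack = []                      # invariant: ascending from bottom to top
    while queue:
        x = queue.popleft()
        # Move larger elements back to the queue to make room for x.
        while stack and stack[-1] > x:
            queue.append(stack.pop())
        stack.append(x)
    return stack                    # bottom-to-top is ascending order

print(queue_stack_sort([3, 1, 4, 1, 5, 9, 2, 6]))
```

Elements moved back to the queue are eventually re-processed, so the loop terminates once every inversion has been resolved.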

  2. Arbitrary Shape Deformation in CFD Design

    NASA Technical Reports Server (NTRS)

    Landon, Mark; Perry, Ernest

    2014-01-01

    Sculptor(R) is a commercially available software tool, based on Arbitrary Shape Design (ASD), which allows the user to perform shape optimization for computational fluid dynamics (CFD) design. The tool provides important advances in the state of the art of automatic CFD shape deformation and optimization software. CFD is an analysis tool used by engineering designers to gain a greater understanding of the fluid flow phenomena involved in the components being designed. The next step in the engineering design process is to modify the design to improve the components' performance, a step that has traditionally been performed manually via trial and error. Two major problems that have, in the past, hindered the development of automated CFD shape optimization are (1) inadequate shape parameterization algorithms, and (2) inadequate algorithms for CFD grid modification. The ASD developed as part of the Sculptor(R) software tool is a major advancement in solving these two issues. First, the ASD allows the CFD designer to freely create his or her own shape parameters, thereby eliminating the restriction of only being able to use the CAD model parameters. Second, the software performs a smooth volumetric deformation, which eliminates the extremely costly process of remeshing the grid for every shape change (which is how this had previously been achieved). Sculptor(R) can be used to optimize shapes for the aerodynamic and structural design of spacecraft, aircraft, watercraft, ducts, and other objects that affect and are affected by flows of fluids and heat. Sculptor(R) makes it possible to perform, in real time, a design change that would manually take hours or days if remeshing were needed.

  3. IPO: a tool for automated optimization of XCMS parameters.

    PubMed

    Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph

    2015-04-16

    Untargeted metabolomics generates a huge amount of data, and software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on the algorithm parameter settings: if they are not carefully chosen, suboptimal settings can easily lead to biased results, so the parameter settings themselves require optimization. Several parameter optimization approaches have been proposed, but a software package for parameter optimization that is free of intricate experimental labeling steps, fast, and widely applicable has been missing. We implemented the software package IPO ('Isotopologue Parameter Optimization'), which is fast, free of labeling steps, and applicable to data from different kinds of samples, from different methods of liquid chromatography - high resolution mass spectrometry, and from different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable (13)C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are generated by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and a test set. IPO resulted in an increase of reliable groups (146% - 361%), a decrease of non-reliable groups (3% - 8%) and a reduction of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. We were also able to show the potential of IPO to increase the reliability of metabolomics data. The source code is implemented in R, tested on Linux and Windows, and freely available for download at https://github.com/glibiseller/IPO . The training sets and test sets can be downloaded from https://health.joanneum.at/IPO .
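
    The design-of-experiments and response-surface step can be sketched generically. The parameter names and score function below are hypothetical stand-ins, not IPO's actual R code: evaluate a small design over two parameters, fit a quadratic response surface, and read off the predicted optimum.

```python
import numpy as np

# Hypothetical score over two generic peak-picking parameters p and s
# (illustrative only; IPO's real scores come from isotopologue peaks).
def score(p, s):
    return 10.0 - (p - 10.0) ** 2 / 25.0 - (s - 6.0) ** 2 / 9.0

# 3x3 design of experiments.
pts = np.array([[p, s] for p in (5.0, 10.0, 15.0) for s in (3.0, 6.0, 9.0)])
y = np.array([score(p, s) for p, s in pts])

# Quadratic response-surface model with basis 1, p, s, p^2, s^2, p*s.
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] ** 2, pts[:, 1] ** 2, pts[:, 0] * pts[:, 1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the fitted surface on a fine grid and take the best setting.
P, S = np.meshgrid(np.linspace(5, 15, 101), np.linspace(3, 9, 61))
pred = (beta[0] + beta[1] * P + beta[2] * S
        + beta[3] * P ** 2 + beta[4] * S ** 2 + beta[5] * P * S)
i = np.unravel_index(np.argmax(pred), pred.shape)
best_p, best_s = P[i], S[i]
```

    Only the nine designed experiments run the expensive scoring; the surface model supplies the optimum in between.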

  4. Impact of DNA twist accumulation on progressive helical wrapping of torsionally constrained DNA.

    PubMed

    Li, Wei; Wang, Peng-Ye; Yan, Jie; Li, Ming

    2012-11-21

    DNA wrapping is an important mechanism for chromosomal DNA packaging in cells and viruses. Previous studies of DNA wrapping have been performed mostly on torsionally unconstrained DNA, while in vivo DNA is often under torsional constraint. In this study, we extend a previously proposed theoretical model for wrapping of torsionally unconstrained DNA to a new model including the contribution of DNA twist energy, which influences DNA wrapping drastically. In particular, due to accumulation of twist energy during DNA wrapping, it predicts a finite amount of DNA that can be wrapped on a helical spool. The predictions of the new model are tested by single-molecule study of DNA wrapping under torsional constraint using magnetic tweezers. The theoretical predictions and the experimental results are consistent with each other and their implications are discussed.

  5. Constrained evolution in numerical relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  6. Intraoperative pulmonary embolism of Harrington rod during spinal surgery: the potential dangers of rod cutting.

    PubMed

    Aylott, Caspar E W; Hassan, Kamran; McNally, Donal; Webb, John K

    2006-12-01

    This is a case report and laboratory-based biomechanics study. The objective is to report the first case of embolisation of a titanium rod into the pulmonary artery during scoliosis surgery, and to investigate the potential of an unconstrained cut titanium rod fragment to cause wounding, with reference to recognised weapons. Embolisation of a foreign body to the heart is rare; bullet embolisation to the heart and lungs has been reported only infrequently over the last 80 years, and iatrogenic cases of foreign body embolisation are very rare. Fifty 1-2 cm segments of titanium rod were cut in an unconstrained manner and a novel method was used to calculate velocity. A high-speed camera (6,000 frames/s) was used to further measure velocity and study projectile motion. The wounding potential was investigated using lamb's liver, high-speed photography and local dissection. Rod velocities were measured in excess of 23 m s(-1). Rods were seen to tumble end-over-end with a maximum speed of 560 revolutions/s. The maximum kinetic energy was 0.61 J, approximately 2% that of a crossbow; this is sufficient to cause significant liver damage. The degree of surface damage and internal disruption was influenced by the orientation of the rod fragment at impact. An unconstrained cut segment of a titanium rod has significant potential to wound. Precautions should be taken to avoid this potentially disastrous but preventable complication.

  7. Long-term Outcome of Unconstrained Primary Total Hip Arthroplasty in Ipsilateral Residual Poliomyelitis.

    PubMed

    Buttaro, Martín A; Slullitel, Pablo A; García Mansilla, Agustín M; Carlucci, Sofía; Comba, Fernando M; Zanotti, Gerardo; Piccaluga, Francisco

    2017-03-01

    Incapacitating articular sequelae in the hip joint have been described for patients with late effects of poliomyelitis. In these patients, total hip arthroplasty (THA) has been associated with a substantial rate of dislocation. This study was conducted to evaluate the long-term clinical and radiologic outcomes of unconstrained THA in this specific group of patients. The study included 6 patients with ipsilateral polio who underwent primary THA between 1985 and 2006. Patients with polio who underwent THA on the nonparalytic limb were excluded. Mean follow-up was 119.5 months (minimum, 84 months). Clinical outcomes were evaluated with the modified Harris Hip Score (mHHS) and the visual analog scale (VAS) pain score. Radiographs were examined to identify the cause of complications and determine the need for revision surgery. All patients showed significantly better functional results when preoperative and postoperative mHHS (67.58 vs 87.33, respectively; P=.002) and VAS pain score (7.66 vs 2, respectively; P=.0003) were compared. Although 2 cases of instability were diagnosed, only 1 patient needed acetabular revision as a result of component malpositioning. None of the patients had component loosening, osteolysis, or infection. Unconstrained THA in the affected limb of patients with poliomyelitis showed favorable long-term clinical results, with improved function and pain relief. Nevertheless, instability may be a more frequent complication in this group of patients compared with the general population. [Orthopedics. 2017; 40(2):e255-e261.]. Copyright 2016, SLACK Incorporated.

  8. Ethanol self-administration in serotonin transporter knockout mice: unconstrained demand and elasticity.

    PubMed

    Lamb, R J; Daws, L C

    2013-10-01

    Low serotonin function is associated with alcoholism, leading to speculation that increasing serotonin function could decrease ethanol consumption. Mice with one or two deletions of the serotonin transporter (SERT) gene have increased extracellular serotonin. To examine the relationship between SERT genotype and motivation for alcohol, we compared ethanol self-administration in mice with zero (knockout, KO), one (HET) or two copies (WT) of the SERT gene. All three genotypes learned to self-administer ethanol. The SSRI, fluvoxamine, decreased responding for ethanol in the HET and WT, but not the KO mice. When tested under a progressive ratio schedule, KO mice had lower breakpoints than HET or WT. As work requirements were increased across sessions, behavioral economic analysis of ethanol self-administration indicated that the decreased breakpoint in KO as compared to HET or WT mice was a result of lower levels of unconstrained demand, rather than differences in elasticity, i.e. the proportional decreases in ethanol earned with increasing work requirements were similar across genotypes. The difference in unconstrained demand was unlikely to result from motor or general motivational factors, as both WT and KO mice responded at high levels for a 50% condensed milk solution. As elasticity is hypothesized to measure essential value, these results indicate that KO mice value ethanol similarly to WT or HET mice despite having lower breakpoints for ethanol. © 2013 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  9. Assessment of patient functional performance in different knee arthroplasty designs during unconstrained squat

    PubMed Central

    Verdini, Federica; Zara, Claudio; Leo, Tommaso; Mengarelli, Alessandro; Cardarelli, Stefano; Innocenti, Bernardo

    2017-01-01

    Summary Background In this paper, a squat termed "unconstrained" by the authors, because it is performed without constraints on foot position, speed, or the maximum knee angle to be reached, was tested as a motor task for revealing differences in functional performance after knee arthroplasty. It involves large joint ranges of motion, does not compromise joint safety, and requires accurate control strategies to maintain balance. Methods Motion capture techniques were used to study the squat in a healthy control group (CTR) and in three groups, each characterised by a specific knee arthroplasty design: a Total Knee Arthroplasty (TKA), a Mobile Bearing and a Fixed Bearing Unicompartmental Knee Arthroplasty (respectively MBUA and FBUA). The squat was analysed during the descent, maintenance and ascent phases and described by speed, angular kinematics of the lower and upper body, the Center of Pressure (CoP) trajectory, and muscle activation timing of the quadriceps and biceps femoris. Results Compared to CTR, knee maximum flexion was lower for TKA and MBUA, vertical speed during descent and ascent was reduced, and the duration of the whole movement was longer. CoP mean distance was higher for all arthroplasty groups during descent, as was CoP mean velocity for MBUA and TKA during ascent and descent. Conclusions The unconstrained squat is able to reveal differences in functional performance between the control and arthroplasty groups and between different arthroplasty designs. Considering the similarity index calculated for the variables showing statistical significance, FBUA performance appears to be closest to that of the CTR group. Level of evidence III a. PMID:29387646

  10. Research on mixed network architecture collaborative application model

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, rapid development of geographical business, and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (whether Client/Server or Browser/Server) does not support this well. Collaborative application is one good resolution. A collaborative application has four main problems to resolve: consistency and co-edit conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and bring new methods for cooperation and for access to spatial data. The multi-level cache holds a part of the full data; it reduces the network load and improves the access and handling of spatial data, especially when editing spatial data. With agent technology, we make full use of its intelligence for managing the cache and cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  11. Working with invalid boundary conditions: lessons from the field for communicating about climate change with public audiences

    NASA Astrophysics Data System (ADS)

    Gunther, A.

    2015-12-01

    There is an ongoing need to communicate with public audiences about climate science, current and projected impacts, the importance of reducing greenhouse gas emissions, and the requirement to prepare for changes that are likely unavoidable. It is essential that scientists are engaged and active in this effort. Scientists can be more effective communicators about climate change to non-scientific audiences if we recognize that some of the normal "boundary conditions" under which we operate do not need to apply. From how we are trained to how we think about our audience, there are some specific skills and practices that allow us to be more effective communicators. The author will review concepts for making our communication more effective based upon his experience from over 60 presentations about climate change to public audiences. These include expressing how your knowledge makes you feel, anticipating (and accepting) questions unconstrained by physics, respecting beliefs and values while separating them from evidence, and using the history of climate science to provide a compelling narrative. Proper attention to presentation structure (particularly an opening statement), speaking techniques for audience engagement, and effective use of presentation software are also important.

  12. Development of the disable software reporting system on the basis of the neural network

    NASA Astrophysics Data System (ADS)

    Gavrylenko, S.; Babenko, O.; Ignatova, E.

    2018-04-01

    The PE structure of malicious and benign software is analyzed, features are highlighted, and binary sign vectors are obtained and used as inputs for training the neural network. A software model for detecting malware based on the ART-1 neural network was developed, optimal similarity coefficients were found, and testing was performed. The results show the feasibility of using the developed system for identifying malicious software as part of computer system protection.

  13. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 2

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    The Control/Structures Integration Program, a survey of available software for control of flexible structures, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software are discussed.

  14. Software Issues at the User Interface

    DTIC Science & Technology

    1991-05-01

    We review software issues that are critical to the successful integration of parallel computers into mainstream scientific computing. Clearly a compiler is the most important software tool available to a... The development of an optimizing compiler of this quality, addressing communication instructions as well as computational instructions, is a major...

  15. Software for Alignment of Segments of a Telescope Mirror

    NASA Technical Reports Server (NTRS)

    Hall, Drew P.; Howard, Richard T.; Ly, William C.; Rakoczy, John M.; Weir, John M.

    2006-01-01

    The Segment Alignment Maintenance System (SAMS) software is designed to maintain the overall focus and figure of the large segmented primary mirror of the Hobby-Eberly Telescope. This software reads measurements made by sensors attached to the segments of the primary mirror and from these measurements computes optimal control values to send to actuators that move the mirror segments.

  16. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  17. Control law synthesis and optimization software for large order aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Pototzky, A.; Noll, Thomas

    1989-01-01

    A flexible aircraft or space structure with active control is typically modeled by a large-order state space system of equations in order to accurately represent the rigid and flexible body modes, unsteady aerodynamic forces, actuator dynamics and gust spectra. The control law of this multi-input/multi-output (MIMO) system is expected to satisfy multiple design requirements on the dynamic loads, responses, actuator deflection and rate limitations, as well as maintain certain stability margins, yet should be simple enough to be implemented on an onboard digital microprocessor. A software package for performing an analog or digital control law synthesis for such a system, using optimal control theory and constrained optimization techniques is described.

  18. New software developments for quality mesh generation and optimization from biomedical imaging data.

    PubMed

    Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko

    2014-01-01

    In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Process Approach for Modeling of Machine and Tractor Fleet Structure

    NASA Astrophysics Data System (ADS)

    Dokin, B. D.; Aletdinova, A. A.; Kravchenko, M. S.; Tsybina, Y. S.

    2018-05-01

    Existing software packages for modelling machine and tractor fleet structure are mostly aimed at solving an optimization task. However, their creators choose a single optimization criterion, incorporate it in the software, and justify why it is the best without giving the decision maker the opportunity to choose a criterion suited to their enterprise. To analyze the "bottlenecks" of machine and tractor fleet modelling, the authors of this article created a process model that includes adjusting the machinery-use plan by searching through alternative technologies. As a result, the following recommendations for software development have been worked out: introduce a database of alternative technologies; allow the user to change the timing of operations even beyond the allowable limits, in which case the incurred loss is calculated; allow the optimization task to be ruled out or, when it is needed, allow the user to choose the optimization criterion; and introduce a graphical display of the annual complex of works sufficient for developing and adjusting a business strategy.

  20. A method for automatically optimizing medical devices for treating heart failure: designing polymeric injection patterns.

    PubMed

    Wenk, Jonathan F; Wall, Samuel T; Peterson, Robert C; Helgerson, Sam L; Sabbah, Hani N; Burger, Mike; Stander, Nielen; Ratcliffe, Mark B; Guccione, Julius M

    2009-12-01

    Heart failure continues to present a significant medical and economic burden throughout the developed world. Novel treatments involving the injection of polymeric materials into the myocardium of the failing left ventricle (LV) are currently being developed, which may reduce elevated myofiber stresses during the cardiac cycle and act to retard the progression of heart failure. A finite element (FE) simulation-based method was developed in this study that can automatically optimize the injection pattern of the polymeric "inclusions" according to a specific objective function, using commercially available software tools. The FE preprocessor TRUEGRID((R)) was used to create a parametric axisymmetric LV mesh matched to experimentally measured end-diastole and end-systole metrics from dogs with coronary microembolization-induced heart failure. Passive and active myocardial material properties were defined by a pseudo-elastic-strain energy function and a time-varying elastance model of active contraction, respectively, that were implemented in the FE software LS-DYNA. The companion optimization software LS-OPT was used to communicate directly with TRUEGRID((R)) to determine FE model parameters, such as defining the injection pattern and inclusion characteristics. The optimization resulted in an intuitive optimal injection pattern (i.e., the one with the greatest number of inclusions) when the objective function was weighted to minimize mean end-diastolic and end-systolic myofiber stress and ignore LV stroke volume. In contrast, the optimization resulted in a nonintuitive optimal pattern (i.e., 3 inclusions longitudinally x 6 inclusions circumferentially) when both myofiber stress and stroke volume were incorporated into the objective function with different weights.

  1. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
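
    The polling-switch selection can be illustrated with a greedy sketch. This is a hypothetical simplification of the paper's ILP (the names and the set-cover framing are assumptions, not the authors' algorithm): pick switches until every flow's route contains at least one chosen switch, preferring high coverage per unit polling cost.

```python
def choose_polling_switches(flows, cost):
    """Greedy set-cover sketch (hypothetical simplification of the ILP):
    flows: dict flow_id -> set of switch ids on that flow's route
    cost:  dict switch id -> polling cost
    Returns a list of switches whose polling observes every flow."""
    switches = {s for route in flows.values() for s in route}
    uncovered = set(flows)
    chosen = []
    while uncovered:
        # Greedy choice: flows newly covered per unit polling cost.
        best = max(switches,
                   key=lambda s: sum(1 for f in uncovered if s in flows[f]) / cost[s])
        newly = {f for f in uncovered if best in flows[f]}
        if not newly:          # remaining flows traverse no known switch
            break
        chosen.append(best)
        switches.discard(best)
        uncovered -= newly
    return chosen
```

    The real joint optimization also re-routes flows so that they pass through cheap-to-poll switches, which this sketch does not attempt.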

  2. Neural Network and Regression Approximations in High Speed Civil Transport Aircraft Design Optimization

    NASA Technical Reports Server (NTRS)

    Patniak, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    1998-01-01

    Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.
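
    The surrogate idea can be sketched in a few lines. The merit function below is a hypothetical stand-in for the expensive analysis code, not the Flight Optimization System: sample the expensive analyzer once, fit a cheap regression approximation, then optimize on the approximation.

```python
import numpy as np

def expensive_merit(x):
    """Stand-in for a costly analysis code (hypothetical merit function)."""
    return (x - 1.7) ** 2 + 3.0

# Sample the expensive analyzer once to build training data.
xs = np.linspace(0.0, 4.0, 9)
ys = np.array([expensive_merit(x) for x in xs])

# Fit a cheap quadratic surrogate (the regression approximation).
surrogate = np.poly1d(np.polyfit(xs, ys, deg=2))

# Optimize on the surrogate: thousands of evaluations now cost almost nothing.
grid = np.linspace(0.0, 4.0, 4001)
x_opt = grid[np.argmin(surrogate(grid))]
```

    Nine calls to the expensive analyzer replace thousands during the search, which mirrors the hours-to-seconds reduction the abstract reports, at the cost of generating and fitting the training data up front.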

  3. A microprocessor card software server to support the Quebec health microprocessor card project.

    PubMed

    Durant, P; Bérubé, J; Lavoie, G; Gamache, A; Ardouin, P; Papillon, M J; Fortin, J P

    1995-01-01

    The Quebec Health Smart Card Project is advocating the use of a memory card software server[1] (SCAM) to implement a portable medical record (PMR) on a smart card. The PMR is viewed as an object that can be manipulated by SCAM's services; in fact, we can talk about a pseudo-object-oriented approach. This software architecture provides a flexible and evolutive way to manage and optimize the PMR. SCAM is a generic software server; it can manage smart cards as well as optical (laser) cards or other types of memory cards. In the specific case of the Quebec Health Card Project, however, SCAM is used to provide services between physicians' or pharmacists' software and IBM smart card technology. We propose to expose the concepts and techniques used to provide a generic environment for dealing with smart cards (and, more generally, memory cards), to obtain a dynamic and evolutive PMR, to raise the system's global security level and data integrity, to optimize significantly the management of the PMR, and to provide statistical information about the use of the PMR.

  4. Implementation of a software for REmote COMparison of PARticlE and photon treatment plans: ReCompare.

    PubMed

    Löck, Steffen; Roth, Klaus; Skripcak, Tomas; Worbs, Mario; Helmbrecht, Stephan; Jakobi, Annika; Just, Uwe; Krause, Mechthild; Baumann, Michael; Enghardt, Wolfgang; Lühr, Armin

    2015-09-01

    To guarantee equal access to optimal radiotherapy, a concept of patient assignment to photon or particle radiotherapy using remote treatment plan exchange and comparison (ReCompare) was proposed. We demonstrate the implementation of this concept and present its clinical applicability. The ReCompare concept was implemented using a client-server based software solution. A clinical workflow for the remote treatment plan exchange and comparison was defined. The steps required by the user and performed by the software for a complete plan transfer were described, and an additional module for dose-response modeling was added. The ReCompare software was successfully tested in cooperation with three external partner clinics, meeting all required specifications. It was compatible with several standard treatment planning systems, ensured patient data protection, and integrated into the clinical workflow. The ReCompare software can be applied to support non-particle radiotherapy institutions with the patient-specific treatment decision on the optimal irradiation modality by remote treatment plan exchange and comparison. Copyright © 2015. Published by Elsevier GmbH.

  5. NREL Software Models Performance of Wind Plants (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2015-01-01

    This NREL Highlight was developed for the February 2015 Alliance S&T Meeting and describes NREL's Simulator for Offshore Wind Farm Applications (SOWFA) software, used in collaboration with Norway-based Statoil to optimize the layouts and controls of wind plant arrays.

  6. Spacecraft Trajectory Analysis and Mission Planning Simulation (STAMPS) Software

    NASA Technical Reports Server (NTRS)

    Puckett, Nancy; Pettinger, Kris; Hallstrom, John; Brownfield, Dana; Blinn, Eric; Williams, Frank; Wiuff, Kelli; McCarty, Steve; Ramirez, Daniel; Lamotte, Nicole

    2014-01-01

    STAMPS simulates either three- or six-degree-of-freedom cases for all spacecraft flight phases using translated HAL flight software or generic GN&C models. Single or multiple trajectories can be simulated for use in optimization and dispersion analysis. It includes math models for the vehicle and environment, and currently features a "C" version of shuttle onboard flight software. The STAMPS software is used for mission planning and analysis within ascent/descent, rendezvous, proximity operations, and navigation flight design areas.

  7. Noise tolerant illumination optimization applied to display devices

    NASA Astrophysics Data System (ADS)

    Cassarly, William J.; Irving, Bruce

    2005-02-01

    Display devices have historically been designed through an iterative process using numerous hardware prototypes. This process is effective, but the number of iterations is limited by the time and cost to make the prototypes. In recent years, virtual prototyping using illumination software modeling tools has replaced many of the hardware prototypes. Typically, the designer specifies the design parameters, builds the software model, predicts the performance using a Monte Carlo simulation, and uses the performance results to repeat this process until an acceptable design is obtained. What is highly desired, and now possible, is to use illumination optimization to automate the design process. Illumination optimization provides the ability to explore a wider range of design options while also providing improved performance. Since Monte Carlo simulations are often used to calculate the system performance and those predictions have statistical uncertainty, the use of noise-tolerant optimization algorithms is important. The use of noise-tolerant illumination optimization is demonstrated by considering display device designs that extract light using 2D paint patterns as well as 3D textured surfaces. A hybrid optimization approach that combines mesh feedback optimization with a classical optimizer is demonstrated. Displays with LED sources and cold cathode fluorescent lamps are considered.
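
    The sensitivity of a design loop to Monte Carlo noise can be illustrated with a toy sketch (hypothetical merit function and parameters, not the display model from the paper): each candidate design is evaluated many times and the evaluations averaged, so that a simple pattern search is not misled by statistical uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_objective(x):
    """Stand-in for a Monte Carlo illuminance prediction: a smooth
    merit function plus statistical noise (true optimum at x = 2)."""
    return (x - 2.0) ** 2 + 0.05 * rng.normal()

def averaged(x, k=200):
    """Noise-tolerant evaluation: average k Monte Carlo runs."""
    return sum(noisy_objective(x) for _ in range(k)) / k

# Compass (pattern) search with a shrinking step on the averaged objective.
x, step = 0.0, 1.0
for _ in range(40):
    candidates = [x - step, x, x + step]
    values = [averaged(c) for c in candidates]
    best = candidates[int(np.argmin(values))]
    if best == x:
        step *= 0.5       # no improvement: refine the search
    else:
        x = best          # move to the better candidate

print(round(x, 2))
```

    Averaging trades extra Monte Carlo evaluations for a lower noise floor; dedicated noise-tolerant optimizers make that trade-off adaptively instead of using a fixed sample count.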

  8. Towards a high performance geometry library for particle-detector simulations

    DOE PAGES

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...

    2015-05-22

    Thread-parallelization and single-instruction multiple data (SIMD) "vectorisation" of software components in HEP computing has become a necessity to fully benefit from current and future computing hardware. In this context, the Geant-Vector/GPU simulation project aims to re-engineer current software for the simulation of the passage of particles through detectors in order to increase the overall event throughput. As one of the core modules in this area, the geometry library plays a central role, and vectorising its algorithms will be one of the cornerstones towards achieving good CPU performance. Here, we report on the progress made in vectorising the shape primitives, as well as in applying new C++ template-based optimizations of existing code available in the Geant4, ROOT or USolids geometry libraries. We focus on a presentation of our software development approach that aims to provide optimized code for all use cases of the library (e.g., single-particle and many-particle APIs) and to support different architectures (CPU and GPU) while keeping the code base small, manageable and maintainable. We report on a generic and templated C++ geometry library as a continuation of the AIDA USolids project. The experience gained with these developments will be beneficial to other parts of the simulation software, such as the optimization of the physics library, and possibly to other parts of the experiment software stack, such as reconstruction and analysis.
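
    The gain from operating on baskets of particles at once can be sketched outside any HEP framework (an illustrative numpy example, not the VecGeom/USolids API): a per-particle loop versus a single array-at-a-time expression for the distance from many points to a sphere, the kind of shape primitive discussed above.

```python
import numpy as np

def distance_to_sphere_loop(points, center, radius):
    """Scalar baseline: one particle at a time."""
    out = np.empty(len(points))
    for i, p in enumerate(points):
        out[i] = np.sqrt(((p - center) ** 2).sum()) - radius
    return out

def distance_to_sphere_vec(points, center, radius):
    """Vectorised version: the whole particle basket in one expression,
    which maps naturally onto SIMD units and cache-friendly memory access."""
    return np.linalg.norm(points - center, axis=1) - radius

rng = np.random.default_rng(0)
pts = rng.normal(size=(10_000, 3))
d_loop = distance_to_sphere_loop(pts, np.zeros(3), 1.0)
d_vec = distance_to_sphere_vec(pts, np.zeros(3), 1.0)
assert np.allclose(d_loop, d_vec)   # identical results, very different speed
```

    The same transformation applied to a templated C++ primitive is what lets one code base serve the scalar, vector, and GPU backends.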

  9. Towards a high performance geometry library for particle-detector simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.

    Thread-parallelization and single-instruction multiple data (SIMD) "vectorisation" of software components in HEP computing has become a necessity to fully benefit from current and future computing hardware. In this context, the Geant-Vector/GPU simulation project aims to re-engineer current software for the simulation of the passage of particles through detectors in order to increase the overall event throughput. As one of the core modules in this area, the geometry library plays a central role, and vectorising its algorithms will be one of the cornerstones towards achieving good CPU performance. Here, we report on the progress made in vectorising the shape primitives, as well as in applying new C++ template-based optimizations of existing code available in the Geant4, ROOT or USolids geometry libraries. We focus on a presentation of our software development approach that aims to provide optimized code for all use cases of the library (e.g., single-particle and many-particle APIs) and to support different architectures (CPU and GPU) while keeping the code base small, manageable and maintainable. We report on a generic and templated C++ geometry library as a continuation of the AIDA USolids project. The experience gained with these developments will be beneficial to other parts of the simulation software, such as the optimization of the physics library, and possibly to other parts of the experiment software stack, such as reconstruction and analysis.

  10. Weighted SGD for ℓp Regression with Randomized Preconditioning.

    PubMed

    Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W

    2016-01-01

    In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers, including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that depend only on the lower dimension of the linear system, while maintaining low computational complexity. Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm. In particular, when solving ℓ1 regression for an n-by-d matrix A, pwSGD returns an approximate solution with ε relative error in the objective value in O(log n·nnz(A) + poly(d)/ε²) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints. In general, this is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and in the solution vector measured in prediction norm in O(log n·nnz(A) + poly(d) log(1/ε)/ε) time. We show that for unconstrained ℓ2 regression, this complexity is comparable to that of RLA and is asymptotically better than several state-of-the-art solvers in the regime where the desired accuracy ε, high dimension n, and low dimension d satisfy d ≥ 1/ε and n ≥ d²/ε. We also provide lower bounds on the coreset complexity for more general regression problems, indicating that new ideas will still be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems. Finally, the effectiveness of these algorithms is illustrated numerically on both synthetic and real datasets; the results are consistent with our theoretical findings and demonstrate that pwSGD converges to a medium-precision solution, e.g., ε = 10⁻³, more quickly.
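
    The pwSGD recipe (precondition, build an importance sampling distribution, then run weighted SGD) can be sketched for unconstrained ℓ2 regression. This is a simplified illustration: a full QR factorization stands in for the fast randomized sketch, and preconditioned row norms stand in for the leverage-score-based sampling weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined least-squares problem: min_x ||Ax - b||_2
n, d = 500, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

# Preconditioning: with R from a QR factorization, U = A R^{-1} is
# perfectly conditioned (the paper uses a fast randomized sketch instead).
_, R = np.linalg.qr(A)
U = A @ np.linalg.inv(R)

# Importance sampling distribution from the preconditioned row norms
# (a stand-in for the leverage-score-based weights).
p = (U ** 2).sum(axis=1)
p /= p.sum()

y = np.zeros(d)                        # iterate in the preconditioned space
for t in range(3000):
    idx = rng.choice(n, size=32, p=p)  # weighted mini-batch
    r = U[idx] @ y - b[idx]
    # Unbiased estimate of the full gradient U^T (Uy - b)
    g = (U[idx] * (r / p[idx])[:, None]).mean(axis=0)
    y -= 0.5 / (1.0 + t / 300.0) * g   # decaying step size

x_hat = np.linalg.solve(R, y)          # map back to the original variables
assert np.linalg.norm(A @ x_hat - b) < 0.5 * np.linalg.norm(b)
```

    Because the preconditioned system is well-conditioned, plain SGD on it converges at a rate that depends only on the low dimension d, which is the essence of the paper's result.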

  11. Weighted SGD for ℓp Regression with Randomized Preconditioning*

    PubMed Central

    Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W.

    2018-01-01

    In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers, including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that depend only on the lower dimension of the linear system, while maintaining low computational complexity. Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm. In particular, when solving ℓ1 regression for an n-by-d matrix A, pwSGD returns an approximate solution with ε relative error in the objective value in 𝒪(log n·nnz(A) + poly(d)/ε²) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints. In general, this is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and in the solution vector measured in prediction norm in 𝒪(log n·nnz(A) + poly(d) log(1/ε)/ε) time. We show that for unconstrained ℓ2 regression, this complexity is comparable to that of RLA and is asymptotically better than several state-of-the-art solvers in the regime where the desired accuracy ε, high dimension n, and low dimension d satisfy d ≥ 1/ε and n ≥ d²/ε. We also provide lower bounds on the coreset complexity for more general regression problems, indicating that new ideas will still be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems. Finally, the effectiveness of these algorithms is illustrated numerically on both synthetic and real datasets; the results are consistent with our theoretical findings and demonstrate that pwSGD converges to a medium-precision solution, e.g., ε = 10⁻³, more quickly. PMID:29782626

  12. Computer-intensive simulation of solid-state NMR experiments using SIMPSON.

    PubMed

    Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas

    2014-09-01

    Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups, and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS). In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Lean and Efficient Software: Whole-Program Optimization of Executables

    DTIC Science & Technology

    2013-01-03

    staffing for the project; implementing the necessary infrastructure (testing, performance evaluation, needed support software, bug and issue...in the SOW. The result of the planning discussions is shown in the milestone table (section 6). In addition, we selected appropriate engineering

  14. Information technologies in optimization process of monitoring of software and hardware status

    NASA Astrophysics Data System (ADS)

    Nikitin, P. V.; Savinov, A. N.; Bazhenov, R. I.; Ryabov, I. V.

    2018-05-01

    The article describes a model of a hardware and software monitoring system for a large company that provides customers with software as a service (SaaS) using information technology. The main functions of the monitoring system are: provision of up-to-date data for analyzing the state of the IT infrastructure, rapid detection of faults, and their effective elimination. The main risks associated with the provision of these services are described, the comparative characteristics of the software are given, and the authors' methods for monitoring the status of software and hardware are proposed.

  15. Fast and accurate modeling of stray light in optical systems

    NASA Astrophysics Data System (ADS)

    Perrin, Jean-Claude

    2017-11-01

    The first problem to be solved in most optical designs with respect to stray light is that of internal reflections on the several surfaces of individual lenses and mirrors, and on the detector itself. The stray light ratio can be considerably reduced by taking stray light into account during the optimization, to determine solutions in which the irradiance due to these ghosts is kept to the minimum possible value. Unfortunately, the routines available in most optical design software, for example CODE V, do not by themselves permit exact quantitative calculations of the stray light due to these ghosts. Therefore, the engineer in charge of the optical design is confronted with the problem of using two different software packages: one for the design and optimization, for example CODE V, and one for stray light analysis, for example ASAP. This makes a complete optimization very complex. Nevertheless, using special techniques and combinations of the routines available in CODE V, it is possible to have at one's disposal a software macro tool to do such an analysis quickly and accurately, including Monte Carlo ray tracing, or taking into account diffraction effects. This analysis can be done in a few minutes, compared to hours with other software.
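
    One reason ghost analysis is combinatorially heavy is that every unordered pair of surfaces generates a candidate two-reflection ghost path, so a system with N surfaces has N(N-1)/2 ghosts to trace. A small enumeration sketch (hypothetical surface numbering):

```python
from itertools import combinations

def two_reflection_ghosts(n_surfaces):
    """Enumerate two-bounce ghost paths: light reflects first off a later
    surface j, then off an earlier surface i < j, before reaching the image."""
    surfaces = range(1, n_surfaces + 1)
    return [(j, i) for i, j in combinations(surfaces, 2)]

ghosts = two_reflection_ghosts(10)   # e.g., a 5-element lens with 10 surfaces
print(len(ghosts))                   # N(N-1)/2 = 45 ghost pairs to analyze
```

    Each pair then requires its own ray trace with the two surfaces treated as partial mirrors, which is why automating the bookkeeping inside the design loop pays off.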

  16. Time cycle analysis and simulation of material flow in MOX process layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, S.; Saraswat, A.; Danny, K.M.

    The (U,Pu)O₂ MOX fuel is the driver fuel for the upcoming PFBR (Prototype Fast Breeder Reactor). The fuel contains around 30% PuO₂. The presence of high percentages of reprocessed PuO₂ necessitates the design of an optimized fuel fabrication process line which addresses both production needs and regulatory norms regarding radiological safety criteria. The powder-pellet route has a highly unbalanced time cycle. This difficulty can be overcome by optimizing the process layout in terms of equipment redundancy and the scheduling of input powder batches. Different schemes are tested before implementation in the process line with the help of a software tool. This software simulates the material movement through the optimized process layout. Different material processing schemes have been devised, and the validity of the schemes is tested with the software. Schemes in which production batches meet at any glove box location are considered invalid. A valid scheme ensures adequate spacing between the production batches and at the same time meets the production target. This software can be further improved by accurately calculating the material movement time through the glove box train. One important factor is accounting for material handling time with automation systems in place.
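
    The validity test described above (a scheme is invalid if two production batches ever occupy the same glove box at the same time) can be sketched as an interval-overlap check. The station durations and batch offsets below are hypothetical, not PFBR process data.

```python
def station_intervals(offset, durations):
    """Half-open occupancy interval of one batch at each glove box."""
    intervals, t = [], offset
    for d in durations:
        intervals.append((t, t + d))
        t += d
    return intervals

def scheme_is_valid(offsets, durations):
    """A scheme is valid if no two batches overlap at any station."""
    per_batch = [station_intervals(o, durations) for o in offsets]
    for s in range(len(durations)):                 # each glove box
        spans = sorted(batch[s] for batch in per_batch)
        for (a1, b1), (a2, b2) in zip(spans, spans[1:]):
            if a2 < b1:                             # overlap at this station
                return False
    return True

durations = [2, 3, 1, 2]                       # hypothetical hours per glove box
print(scheme_is_valid([0, 4, 8], durations))   # spaced batch starts: True
print(scheme_is_valid([0, 2], durations))      # batches collide at box 2: False
```

    Launching batches at intervals no shorter than the slowest station keeps the line collision-free, which is the balancing idea behind the scheduling schemes.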

  17. Light extraction efficiency analysis of GaN-based light-emitting diodes with nanopatterned sapphire substrates.

    PubMed

    Pan, Jui-Wen; Tsai, Pei-Jung; Chang, Kao-Der; Chang, Yung-Yuan

    2013-03-01

    In this paper, we propose a method to analyze the light extraction efficiency (LEE) enhancement of a nanopatterned sapphire substrate (NPSS) light-emitting diode (LED) by comparing wave optics software with ray optics software. Finite-difference time-domain (FDTD) simulations represent the wave optics software and LightTools (LTs) simulations represent the ray optics software. First, we find the trends of and an optimal solution for the LEE enhancement when the 2D-FDTD simulations are used to save on simulation time and computational memory. The rigorous coupled-wave analysis method is utilized to explain the trend we get from the 2D-FDTD algorithm. The optimal solution is then applied in 3D-FDTD and LTs simulations. The results are similar, and the difference in LEE enhancement between the two simulations does not exceed 8.5% for a small LED chip area. More than 10⁴ times less computational memory is used in the LTs simulation than in the 3D-FDTD simulation. Moreover, the LEE enhancement from the side of the LED can be obtained in the LTs simulation. An actual-size NPSS LED is simulated using LTs. The results show a more than 307% improvement in the total LEE of the NPSS LED with the optimal solution compared to the conventional LED.

  18. Pareto-optimal reversed-phase chromatography separation of three insulin variants with a solubility constraint.

    PubMed

    Arkell, Karolina; Knutson, Hans-Kristian; Frederiksen, Søren S; Breil, Martin P; Nilsson, Bernt

    2018-01-12

    With the shift of focus of the regulatory bodies, from fixed process conditions towards flexible ones based on process understanding, model-based optimization is becoming an important tool for process development within the biopharmaceutical industry. In this paper, a multi-objective optimization study of separation of three insulin variants by reversed-phase chromatography (RPC) is presented. The decision variables were the load factor, the concentrations of ethanol and KCl in the eluent, and the cut points for the product pooling. In addition to the purity constraints, a solubility constraint on the total insulin concentration was applied. The insulin solubility is a function of the ethanol concentration in the mobile phase, and the main aim was to investigate the effect of this constraint on the maximal productivity. Multi-objective optimization was performed with and without the solubility constraint, and visualized as Pareto fronts, showing the optimal combinations of the two objectives productivity and yield for each case. Comparison of the constrained and unconstrained Pareto fronts showed that the former diverges when the constraint becomes active, because the increase in productivity with decreasing yield is almost halted. Consequently, we suggest the operating point at which the total outlet concentration of insulin reaches the solubility limit as the most suitable one. According to the results from the constrained optimizations, the maximal productivity on the C4 adsorbent (0.41 kg/(m³ column h)) is less than half of that on the C18 adsorbent (0.87 kg/(m³ column h)). This is partly caused by the higher selectivity between the insulin variants on the C18 adsorbent, but the main reason is the difference in how the solubility constraint affects the processes. Since the optimal ethanol concentration for elution on the C18 adsorbent is higher than for the C4 one, the insulin solubility is also higher, allowing a higher pool concentration.
An alternative method of finding the suggested operating point was also evaluated, and it was shown to give very satisfactory results for well-mapped Pareto fronts. Copyright © 2017 Elsevier B.V. All rights reserved.
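
    The Pareto-front construction can be illustrated with a toy pooling model (purely hypothetical productivity and yield curves, not the chromatography model of the paper): sweep the decision variable, then keep only the non-dominated (productivity, yield) pairs.

```python
import numpy as np

cut = np.linspace(0.0, 1.0, 201)        # hypothetical pooling cut point
yield_ = 1.0 - 0.5 * cut                # yield falls as the pool is cut harder
productivity = cut * (1.2 - cut)        # productivity peaks mid-range

points = list(zip(productivity, yield_))

def dominated(p, pts):
    """p is dominated if another point is at least as good in both
    objectives and strictly better in at least one."""
    return any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in pts)

front = [p for p in points if not dominated(p, points)]
# Only cut points up to the productivity peak survive: beyond it, both
# productivity and yield decrease, so those operating points are dominated.
assert len(front) < len(points)
```

    Adding a solubility constraint would simply remove infeasible points before the filter, which is why an active constraint bends or truncates the front as described above.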

  19. The Fisher-Markov selector: fast selecting maximally separable feature subset for multiclass classification with applications to high-dimensional data.

    PubMed

    Cheng, Qiang; Zhou, Hongbo; Cheng, Jie

    2011-06-01

    Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward the selection of the globally optimal subset of features efficiently, we introduce a new selector, which we call the Fisher-Markov selector, to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with the sparsity as an optimization objective. With properly identified measures for the sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach for optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results.
In pattern recognition and from a model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function which in fact can be obtained with an explicit expression.
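
    The flavor of scoring features by class separability can be sketched with a per-feature Fisher score, a simplified stand-in for the Fisher-Markov selector (which optimizes over feature subsets jointly rather than ranking features one at a time):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: only feature 0 is discriminative.
n_per_class, n_feat = 100, 20
X0 = rng.normal(size=(n_per_class, n_feat))
X1 = rng.normal(size=(n_per_class, n_feat))
X1[:, 0] += 3.0                        # class shift in feature 0 only
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class scatter over
    within-class scatter, computed independently for each feature."""
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / den

scores = fisher_scores(X, y)
print(int(np.argmax(scores)))          # the shifted feature (index 0) ranks first
```

    A univariate filter like this ignores feature interactions; handling those jointly while keeping the objective unconstrained and globally solvable is exactly what the paper contributes.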

  20. CometBoards Users Manual Release 1.0

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Coroneos, Rula M.; Patnaik, Surya N.; Hopkins, Dale A.; Berke, Lazlo

    1996-01-01

    Several nonlinear mathematical programming algorithms for structural design applications are available at present. These include the sequence of unconstrained minimizations technique, the method of feasible directions, and the sequential quadratic programming technique. The optimality criteria technique and the fully utilized design concept are two other structural design methods. A project was undertaken to bring all these design methods under a common computer environment so that a designer can select any one of these tools that may be suitable for his/her application. To facilitate selection of a design algorithm, to validate and check out the computer code, and to ascertain the relative merits of the design tools, modest finite element structural analysis programs based on the concept of stiffness and integrated force methods have been coupled to each design method. The code that contains both these design and analysis tools, by reading input information from analysis and design data files, can cast the design of a structure as a minimum-weight optimization problem. The code can then solve it with a user-specified optimization technique and a user-specified analysis method. This design code is called CometBoards, which is an acronym for Comparative Evaluation Test Bed of Optimization and Analysis Routines for the Design of Structures. This manual describes for the user a step-by-step procedure for setting up the input data files and executing CometBoards to solve a structural design problem. The manual includes the organization of CometBoards; instructions for preparing input data files; the procedure for submitting a problem; illustrative examples; and several demonstration problems. A set of 29 structural design problems has been solved by using all the optimization methods available in CometBoards. A summary of the optimum results obtained for these problems is appended to this user's manual.
CometBoards, at present, is available for Posix-based Cray and Convex computers, Iris and Sun workstations, and the VM/CMS system.
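
    The sequence of unconstrained minimizations technique can be illustrated on a toy constrained problem (a generic exterior-penalty sketch with a made-up objective and constraint, not CometBoards code): each penalty weight defines an unconstrained subproblem, and the minimizers approach the constrained optimum as the weight grows.

```python
import numpy as np

def objective(x):
    return x[0] ** 2 + x[1] ** 2             # stand-in for structural weight

def violation(x):
    return max(0.0, 1.0 - x[0] - x[1])       # constraint x1 + x2 >= 1

def penalized_grad(x, r):
    g = 2.0 * x                              # gradient of the objective
    v = violation(x)
    if v > 0.0:
        g += 2.0 * r * v * np.array([-1.0, -1.0])   # gradient of r*v^2
    return g

x = np.zeros(2)
for r in [1.0, 10.0, 100.0, 1000.0]:         # increasing penalty weights
    step = 1.0 / (2.0 + 4.0 * r)             # safe step for this curvature
    for _ in range(2000):                    # unconstrained gradient descent
        x = x - step * penalized_grad(x, r)

print(np.round(x, 3))                        # approaches the optimum (0.5, 0.5)
```

    Warm-starting each subproblem from the previous minimizer keeps the inner iterations cheap, which is the practical appeal of SUMT-style methods.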

  1. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    NASA Astrophysics Data System (ADS)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. 
This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.

  2. Constrained Multipoint Aerodynamic Shape Optimization Using an Adjoint Formulation and Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony; Alonso, Juan Jose; Rimlinger, Mark J.; Saunders, David

    1997-01-01

    An aerodynamic shape optimization method that treats the design of complex aircraft configurations subject to high fidelity computational fluid dynamics (CFD), geometric constraints and multiple design points is described. The design process is greatly accelerated through the use of both control theory and distributed memory computer architectures. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on a higher order CFD method. In order to facilitate the integration of these high fidelity CFD approaches into future multi-disciplinary optimization (MDO) applications, new methods must be developed which are capable of simultaneously addressing complex geometries, multiple objective functions, and geometric design constraints. In our earlier studies, we coupled the adjoint based design formulations with unconstrained optimization algorithms and showed that the approach was effective for the aerodynamic design of airfoils, wings, wing-bodies, and complex aircraft configurations. In many of the results presented in these earlier works, geometric constraints were satisfied either by a projection into feasible space or by posing the design space parameterization such that it automatically satisfied constraints. Furthermore, with the exception of reference 9, where the second author initially explored the use of multipoint design in conjunction with adjoint formulations, our earlier works have focused on single point design efforts.
Here we demonstrate that the same methodology may be extended to treat complete configuration designs subject to multiple design points and geometric constraints. Examples are presented for both transonic and supersonic configurations ranging from wing alone designs to complex configuration designs involving wing, fuselage, nacelles and pylons.
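
    The cost advantage of the adjoint formulation can be seen in a minimal linear model (a generic illustration, not the flow solver of the paper): a single extra adjoint solve yields the gradient with respect to all design variables at once, which a finite-difference check confirms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5                        # state size, number of design variables

A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned "flow" operator
B = rng.normal(size=(n, m))                   # forcing depends linearly on design
c = rng.normal(size=n)                        # cost functional J = c^T u
alpha = rng.normal(size=m)                    # design variables

u = np.linalg.solve(A, B @ alpha)             # one forward solve: A u = B alpha
J = c @ u

lam = np.linalg.solve(A.T, c)                 # ONE adjoint solve: A^T lam = c
grad_adjoint = B.T @ lam                      # gradient w.r.t. all m designs

# Finite differences instead need one extra forward solve per design variable.
eps = 1e-6
grad_fd = np.array([
    (c @ np.linalg.solve(A, B @ (alpha + eps * e)) - J) / eps
    for e in np.eye(m)
])
assert np.allclose(grad_adjoint, grad_fd, rtol=1e-4, atol=1e-6)
```

    With thousands of shape variables, replacing one solve per variable with one adjoint solve per objective is what makes gradient-based aerodynamic design tractable.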

  3. Optimal Battery Utilization Over Lifetime for Parallel Hybrid Electric Vehicle to Maximize Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patil, Chinmaya; Naghshtabrizi, Payam; Verma, Rajeev

    This paper presents a control strategy to maximize the fuel economy of a parallel hybrid electric vehicle over a target life of the battery. Many approaches to maximizing the fuel economy of a parallel hybrid electric vehicle do not consider the effect of the control strategy on the life of the battery. This leads to an oversized and underutilized battery. There is a trade-off between how aggressively to use and 'consume' the battery versus using the engine and consuming fuel. The proposed approach addresses this trade-off by exploiting the differences between the fast dynamics of vehicle power management and the slow dynamics of battery aging. The control strategy is separated into two parts: (1) Predictive Battery Management (PBM), and (2) Predictive Power Management (PPM). PBM is the higher level control with a slow update rate, e.g. once per month, responsible for generating optimal set points for PPM. The set points considered in this paper are the battery power limits and State Of Charge (SOC). The problem of finding the optimal set points over the target battery life that minimize engine fuel consumption is solved using dynamic programming. PPM is the lower level control with a high update rate, e.g. once per second, responsible for generating the optimal HEV energy management controls, and is implemented using a model predictive control approach. The PPM objective is to find the engine and battery power commands that achieve the best fuel economy given the battery power and SOC constraints imposed by PBM. Simulation results with a medium duty commercial hybrid electric vehicle and the proposed two-level hierarchical control strategy show that the HEV fuel economy is maximized while meeting a specified target battery life. On the other hand, the optimal unconstrained control strategy achieves marginally higher fuel economy, but fails to meet the target battery life.
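
    The backward dynamic program in the PBM layer can be sketched with made-up numbers (hypothetical fuel and degradation figures, not the paper's vehicle model): choose a monthly battery aggressiveness that minimizes total fuel while keeping battery health above the end-of-life target.

```python
T = 10                                   # planning horizon in months
# Hypothetical action set: monthly battery health loss -> fuel burned that month
# (using the battery harder saves fuel but ages the battery faster).
actions = {0.00: 10.0, 0.02: 7.0, 0.04: 5.0}
grid = [round(0.60 + 0.02 * i, 2) for i in range(21)]   # feasible health levels

value = {h: 0.0 for h in grid}           # terminal cost: any health >= 0.60 is fine
for _ in range(T):                        # backward induction over months
    new_value = {}
    for h in grid:
        best = float("inf")
        for loss, fuel in actions.items():
            h_next = round(h - loss, 2)
            if h_next in value:           # transitions below 0.60 are infeasible
                best = min(best, fuel + value[h_next])
        new_value[h] = best
    value = new_value

print(value[1.0])   # minimum total fuel from full health: 50.0
```

    Here the degradation budget is exactly large enough to use the most aggressive action every month; with a tighter budget the DP automatically rations battery use across the horizon, which is the role PBM plays for PPM.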

  4. Modelling Schumann resonances from ELF measurements using non-linear optimization methods

    NASA Astrophysics Data System (ADS)

    Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo

    2017-04-01

    Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool for investigating both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements were obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the Sierra Nevada national park. The first three modes, contained in the frequency band between 6 and 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of non-linear unconstrained optimization methods applied to the estimation of the Schumann resonances will be presented. Non-linear fitting, also known as the optimization process, is the procedure followed to obtain Schumann resonances from the natural electromagnetic noise. The optimization methods that have been analysed are: Levenberg-Marquardt, conjugate gradient, gradient, Newton and quasi-Newton. The function that the different methods fit to the data is three Lorentzian curves plus a straight line; Gaussian curves have also been considered.
The conclusions of this study are outlined in the following paragraphs: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the gradient method converges less often and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, conjugate gradient and quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has been found.
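    The fit described above can be sketched with SciPy's Levenberg-Marquardt solver; the resonance parameters below are synthetic illustrations, not measured values:

```python
# Illustrative non-linear fit of the kind described above: three Lorentzian
# curves plus a straight line, fitted with the Levenberg-Marquardt method.
# The spectrum here is synthetic; the resonance parameters are assumptions.
import numpy as np
from scipy.optimize import least_squares

def model(p, f):
    """Three Lorentzians plus a linear background.
    p = [A1, f1, w1, A2, f2, w2, A3, f3, w3, slope, intercept]."""
    y = p[9] * f + p[10]
    for i in range(3):
        amp, f0, width = p[3 * i], p[3 * i + 1], p[3 * i + 2]
        y = y + amp * width**2 / ((f - f0) ** 2 + width**2)
    return y

f = np.linspace(6.0, 25.0, 400)           # band containing the first modes
true_p = np.array([1.0, 7.8, 1.0, 0.7, 14.1, 1.3, 0.5, 20.3, 1.6, -0.01, 0.3])
spectrum = model(true_p, f)               # noiseless synthetic "measurement"

guess = np.array([0.8, 8.0, 1.2, 0.8, 14.0, 1.2, 0.4, 20.0, 1.2, 0.0, 0.2])
fit = least_squares(lambda p: model(p, f) - spectrum, guess, method="lm")
centers = sorted(fit.x[[1, 4, 7]])        # recovered central frequencies
```

    On noiseless synthetic data with a nearby initial guess, the three central frequencies are recovered to well under 0.1 Hz; on real spectra, noise and bandwidth affect which methods converge, as the conclusions above indicate.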

  5. Student project of optical system analysis API-library development

    NASA Astrophysics Data System (ADS)

    Ivanova, Tatiana; Zhukova, Tatiana; Dantcaranov, Ruslan; Romanova, Maria; Zhadin, Alexander; Ivanov, Vyacheslav; Kalinkina, Olga

    2017-08-01

    In the paper, an API library developed by students of the Applied and Computer Optics Department (ITMO University) for optical system design is presented. The library performs paraxial and real ray tracing, calculates third-order (Seidel) aberrations and real-ray aberrations of axial and off-axis beams (wave, lateral, longitudinal, coma, distortion, etc.) and, finally, approximates the wave aberration by Zernike polynomials. The real aperture can be calculated by detecting real-ray tracing failures on each surface. So far we assume the optical system is centered, with spherical or second-order aspherical surfaces. Optical glasses can be specified directly by refractive index or by dispersion coefficients. The library can be used for education or research purposes in the area of optical system design. It provides ready-to-use software functions for optical system simulation and analysis that developers can simply plug into their own software, for example for specific synthesis tasks or for investigating new optimization modes. In the paper we present an example of using the library to develop cemented-doublet synthesis software based on Slusarev's methodology. The library is used in an optical system optimization course for in-depth study of the optimization model and its application to optical system design. Developing such software is an excellent experience for students and helps them understand optical image modeling and quality analysis. The development is organized as a joint student project, structured like a real research and development project: each student has his or her own role and then uses the whole library functionality in his or her own master's or bachelor's thesis. Working in such a group gives students useful experience and the opportunity to work as research and development engineers of scientific software in the future.

  6. Multiobjective optimization of hybrid regenerative life support technologies. Topic D: Technology Assessment

    NASA Technical Reports Server (NTRS)

    Manousiouthakis, Vasilios

    1995-01-01

    We developed simple mathematical models for many of the technologies constituting the water reclamation system in a space station. These models were employed for subsystem optimization and for evaluating the performance of individual water reclamation technologies by quantifying their operational 'cost' as a linear function of weight, volume, and power consumption. We then performed preliminary investigations of the performance improvements attainable by simple hybrid systems involving parallel combinations of technologies. We are developing a software tool for synthesizing a hybrid water recovery system (WRS) for long-term space missions. As a conceptual framework, we are employing the state space approach. Given a number of available technologies and the mission specifications, the state space approach would help design flowsheets featuring optimal process configurations, including those with stream connections in parallel, in series, or in recycles. We envision this software tool functioning as follows: given the mission duration, the crew size, water quality specifications, and the cost coefficients, the software will synthesize a water recovery system for the space station. It should require minimal user intervention. The following tasks need to be solved to achieve this goal: (1) formulate a problem statement that will be used to evaluate the advantages of a hybrid WRS over a single-technology WRS; (2) model several WRS technologies that can be employed in the space station; (3) propose a recycling network design methodology (since the WRS synthesis task is a recycling network design problem, it is essential to employ a systematic method in synthesizing this network); (4) develop a software implementation for this design methodology, design a hybrid system using this software, and compare the resulting WRS with a base-case WRS; and (5) create a user-friendly interface for this software tool.

  7. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    PubMed

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell-processor accuracy of predicted platelet (PLT) yields with the goal of better predicting DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. The software prediction was optimized by linear regression analysis, and its optimal cut-off for obtaining a DP was assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions the donors were men and on 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P < .001). Mean software-derived values differed significantly from actual PLT yields, 4.72 × 10¹¹ vs. 6.12 × 10¹¹, respectively (P < .001). The following equation was developed to adjust these values: actual PLT yield = 0.221 + (1.254 × theoretical platelet yield). The ROC curve model showed an optimal apheresis device software prediction cut-off of 4.65 × 10¹¹ to obtain a DP, with a sensitivity of 82.2%, a specificity of 93.3%, and an area under the curve (AUC) of 0.909. The Trima Accel v6.0 software consistently underestimated PLT yields. A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
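    The reported correction and cut-off can be applied directly, as in this sketch; the coefficients and cut-off are those quoted above, while the example prediction value is hypothetical:

```python
# Applying the correction reported above: the software-predicted platelet
# yield is adjusted by the regression equation and compared against the
# ROC-derived cut-off for a double product (DP). Coefficients and cut-off
# come from the abstract; the example donor value is hypothetical.

DP_CUTOFF = 4.65  # software prediction cut-off for a DP, in units of 10^11

def corrected_yield(software_prediction):
    """Adjust the machine-predicted yield (x10^11 PLT) per the regression."""
    return 0.221 + 1.254 * software_prediction

def predicts_double_product(software_prediction):
    """True if the raw software prediction meets the ROC-derived DP cut-off."""
    return software_prediction >= DP_CUTOFF

print(round(corrected_yield(4.72), 2))   # → 6.14 (near the observed 6.12 mean)
print(predicts_double_product(4.72))     # → True
```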

  8. Meta-Analysis inside and outside Particle Physics: Convergence Using the Path of Least Resistance?

    ERIC Educational Resources Information Center

    Jackson, Dan; Baker, Rose

    2013-01-01

    In this note, we explain how the method proposed by Hartung and Knapp provides a compromise between conventional meta-analysis methodology and "unconstrained averaging", as used by the Particle Data Group.

  9. A wearable computing platform for developing cloud-based machine learning models for health monitoring applications.

    PubMed

    Patel, Shyamal; McGinnis, Ryan S; Silva, Ikaro; DiCristofaro, Steve; Mahadevan, Nikhil; Jortberg, Elise; Franco, Jaime; Martin, Albert; Lust, Joseph; Raj, Milan; McGrane, Bryan; DePetrillo, Paolo; Aranyosi, A J; Ceruolo, Melissa; Pindado, Jesus; Ghaffari, Roozbeh

    2016-08-01

    Wearable sensors have the potential to enable clinical-grade ambulatory health monitoring outside the clinic. Technological advances have enabled the development of devices that can measure vital signs with great precision, and significant progress has been made towards extracting clinically meaningful information from these devices in research studies. However, translating measurement accuracies achieved in controlled settings such as the lab and clinic to unconstrained environments such as the home remains a challenge. In this paper, we present a novel wearable computing platform for unobtrusive collection of labeled datasets and a new paradigm for continuous development, deployment and evaluation of machine learning models to ensure robust model performance in the transition from lab to home. Using this system, we train activity classification models across two studies and track changes in model performance as we go from constrained to unconstrained settings.

  10. Unconstrained tripolar implants for primary total hip arthroplasty in patients at risk for dislocation.

    PubMed

    Guyen, Olivier; Pibarot, Vincent; Vaz, Gualter; Chevillotte, Christophe; Carret, Jean-Paul; Bejui-Hugues, Jacques

    2007-09-01

    We performed a retrospective study on 167 primary total hip arthroplasty (THA) procedures in 163 patients at high risk for instability to assess the reliability of unconstrained tripolar implants (press-fit outer metal shell articulating a bipolar polyethylene component) in preventing dislocations. Eighty-four percent of the patients had at least 2 risk factors for dislocation. The mean follow-up length was 40.2 months. No dislocation was observed. Harris hip scores improved significantly. Six hips were revised, and no aseptic loosening of the cup was observed. The tripolar implant was extremely successful in achieving stability. However, because of the current lack of data documenting polyethylene wear at additional bearing, the routine use of tripolar implants in primary THA is discouraged and should be considered at the present time only for selected patients at high risk for dislocation and with limited activities.

  11. Direct brain recordings reveal hippocampal rhythm underpinnings of language processing.

    PubMed

    Piai, Vitória; Anderson, Kristopher L; Lin, Jack J; Dewar, Callum; Parvizi, Josef; Dronkers, Nina F; Knight, Robert T

    2016-10-04

    Language is classically thought to be supported by perisylvian cortical regions. Here we provide intracranial evidence linking the hippocampal complex to linguistic processing. We used direct recordings from the hippocampal structures to investigate whether theta oscillations, pivotal in memory function, track the amount of contextual linguistic information provided in sentences. Twelve participants heard sentences that were either constrained ("She locked the door with the") or unconstrained ("She walked in here with the") before presentation of the final word ("key"), shown as a picture that participants had to name. Hippocampal theta power increased for constrained relative to unconstrained contexts during sentence processing, preceding picture presentation. Our study implicates hippocampal theta oscillations in a language task using natural language associations that do not require memorization. These findings reveal that the hippocampal complex contributes to language in an active fashion, relating incoming words to stored semantic knowledge, a necessary process in the generation of sentence meaning.

  12. Beyond the group mind: a quantitative review of the interindividual-intergroup discontinuity effect.

    PubMed

    Wildschut, Tim; Pinter, Brad; Vevea, Jack L; Insko, Chester A; Schopler, John

    2003-09-01

    This quantitative review of 130 comparisons of interindividual and intergroup interactions in the context of mixed-motive situations reveals that intergroup interactions are generally more competitive than interindividual interactions. The authors identify 4 moderators of this interindividual-intergroup discontinuity effect, each based on the theoretical perspective that the discontinuity effect flows from greater fear and greed in intergroup relative to interindividual interactions. Results reveal that each moderator shares a unique association with the magnitude of the discontinuity effect. The discontinuity effect is larger when (a) participants interact with an opponent whose behavior is unconstrained by the experimenter or constrained by the experimenter to be cooperative rather than constrained by the experimenter to be reciprocal, (b) group members make a group decision rather than individual decisions, (c) unconstrained communication between participants is present rather than absent, and (d) conflict of interest is severe rather than mild.

  13. Single Crystals Grown Under Unconstrained Conditions

    NASA Astrophysics Data System (ADS)

    Sunagawa, Ichiro

    Based on detailed investigations of morphology (evolution and variation in external forms), the surface microtopography of crystal faces (spirals and etch figures), internal morphology (growth sectors, growth banding and associated impurity partitioning) and perfection (dislocations and other lattice defects) in single crystals, we can deduce how and by what mechanism a crystal grew and experienced fluctuations in growth parameters through its growth and post-growth history under unconstrained conditions. This information is useful not only in finding an appropriate way to grow highly perfect and homogeneous single crystals, but also in deciphering letters sent from the depths of the Earth and space. It is also useful in discriminating synthetic from natural gemstones. In this chapter, available methods to obtain such information are briefly summarized, and actual examples demonstrating the importance of this type of investigation are selected from both natural minerals (diamond, quartz, hematite, corundum, beryl, phlogopite) and synthetic crystals (SiC, diamond, corundum, beryl).

  14. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  15. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo-Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  16. Hybrid Energy System Design of Micro Hydro-PV-biogas Based Micro-grid

    NASA Astrophysics Data System (ADS)

    Nishrina; Abdullah, A. G.; Risdiyanto, A.; Nandiyanto, ABD

    2017-03-01

    A hybrid renewable energy system is an arrangement of one or more renewable energy sources, possibly combined with conventional energy. This paper describes simulation results for a hybrid renewable power system based on the available potential at an educational institution in Indonesia. The HOMER software was used to simulate and analyse the system in terms of both optimization and economics. This software is built around three main principles: simulation, optimization, and sensitivity analysis. Overall, the presented results show that the software can identify a feasible hybrid power system suitable for realization. The entire demand in the case-study area can be supplied by the system configuration using three quarters of the electricity production; the remaining quarter of the generated energy is excess electricity.

  17. Recursive Optimization of Digital Circuits

    DTIC Science & Technology

    1990-12-14

    Table-of-contents excerpt from the report: Non-MDS Optimization of SAMPLE (Appendix A); BORIS Recursive Optimization System software (Appendix B), comprising the DESIGN.S, PARSE.S, TABULAR.S, MDS.S, and COST.S files.

  18. Application of Layered Perforation Profile Control Technique to Low Permeable Reservoir

    NASA Astrophysics Data System (ADS)

    Wei, Sun

    2018-01-01

    It is difficult to satisfy the demand for profile control of complex well sections and multi-layer reservoirs using conventional profile control technology; therefore, research was conducted on adjusting the injection-production profile through layered perforating-parameter optimization. That is, in the case of co-production from multiple layers, the water absorption of each layer is adjusted by adjusting the perforating parameters, so as to balance the injection-production profile of the whole well section and ultimately enhance the oil displacement efficiency of water flooding. By applying oil-water two-phase percolation theory and the relationship between perforating damage and capacity, a mathematical model for adjusting the injection-production profile through layered perforating-parameter optimization is established, and perforating-parameter optimization software is programmed. Different types of optimization design work are carried out according to different geological conditions and construction purposes using the perforating optimization design software; furthermore, an application test was conducted in a low-permeability reservoir, and the water injection profile became significantly more balanced after perforation with optimized parameters, demonstrating a good field application effect.

  19. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify the text strings that mark specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use.

  20. Inclusion of LCCA in Alaska flexible pavement design software manual.

    DOT National Transportation Integrated Search

    2012-10-01

    Life cycle cost analysis is a key part for selecting materials and techniques that optimize the service life of a pavement in terms of cost and performance. While the Alaska : Flexible Pavement Design software has been in use since 2004, there is no ...

  1. Scaling Watershed Models: Modern Approaches to Science Computation with MapReduce, Parallelization, and Cloud Optimization

    EPA Science Inventory

    Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...

  2. Scout: high-performance heterogeneous computing made simple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  3. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  4. Models and algorithm of optimization launch and deployment of virtual network functions in the virtual data center

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2017-10-01

    The goal of our investigation is the optimization of network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structures of the objects of a virtual data center, including: a level-distribution model of the software-defined infrastructure of a virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for the containerization of virtual network functions in a virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.

  5. Multidisciplinary Modeling Software for Analysis, Design, and Optimization of HRRLS Vehicles

    NASA Technical Reports Server (NTRS)

    Spradley, Lawrence W.; Lohner, Rainald; Hunt, James L.

    2011-01-01

    The concept for Highly Reliable Reusable Launch Systems (HRRLS) under the NASA Hypersonics project is a two-stage-to-orbit, horizontal-take-off / horizontal-landing (HTHL) architecture with an air-breathing first stage. The first-stage vehicle is a slender body with an air-breathing propulsion system that is highly integrated with the airframe. The lightweight slender body will deflect significantly during flight. This global deflection affects the flow over the vehicle and into the engine, and thus the loads and moments on the vehicle. High-fidelity multi-disciplinary analyses that account for these fluid-structure-thermal interactions are required to accurately predict the vehicle loads and the resultant response. These predictions of vehicle response to multi-physics loads, calculated with fluid-structure-thermal interaction, are required in order to optimize the vehicle design over its full operating range. This contract with ResearchSouth addresses one of the primary objectives of the Vehicle Technology Integration (VTI) discipline: the development of high-fidelity multi-disciplinary analysis and optimization methods and tools for HRRLS vehicles. The primary goal of this effort is the development of an integrated software system that can be used for full-vehicle optimization.
This goal was accomplished by: 1) integrating the master code, FEMAP, into the multidiscipline software network to direct the coupling and assure accurate fluid-structure-thermal interaction solutions; 2) loosely coupling the Euler flow solver FEFLO to the available and proven aeroelasticity and large-deformation (FEAP) code; 3) providing a coupled Euler-boundary-layer capability for rapid viscous flow simulation; 4) developing and implementing improved Euler/RANS algorithms into the FEFLO CFD code to provide accurate shock capturing, skin friction, and heat-transfer predictions for HRRLS vehicles in hypersonic flow; 5) performing a Reynolds-averaged Navier-Stokes computation on an HRRLS configuration; 6) integrating the RANS solver with the FEAP code for coupled fluid-structure-thermal capability; 7) integrating the existing NASA SRGULL propulsion flow path prediction software with the FEFLO software for quasi-3D propulsion flow path predictions; and 8) improving and integrating into the network an existing adjoint-based design optimization code.

  6. Effect of the mandible on mouthguard measurements of head kinematics.

    PubMed

    Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B

    2016-06-14

    Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances and future validation testing should include unconstrained mandibles to ensure proper accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
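    The spring-mass disturbance model described above can be sketched as follows; the mass, stiffness, damping, and impact-pulse values are illustrative assumptions, not the paper's identified parameters:

```python
# Toy spring-mass model of the mouthguard artifact described above: the
# mandible (mass) loads the compliant mouthguard material (spring + damper)
# near the sensors, superimposing an oscillation on the true skull motion.
# All parameter values are illustrative assumptions.
import math

m = 0.2          # effective mandible mass [kg] (assumed)
k = 2.0e4        # mouthguard material stiffness [N/m] (assumed)
c = 5.0          # damping [N*s/m] (assumed)
dt, steps = 1e-4, 2000

def skull_accel(t):
    """Idealized half-sine head-impact pulse: 10 ms duration, ~100 g peak."""
    return 981.0 * math.sin(math.pi * t / 0.01) if t < 0.01 else 0.0

x = v = 0.0                  # mandible displacement/velocity relative to skull
peak_artifact = 0.0
for i in range(steps):
    t = i * dt
    # relative mandible dynamics driven by the skull's acceleration
    a_rel = -(k * x + c * v) / m - skull_accel(t)
    v += a_rel * dt          # semi-implicit Euler integration
    x += v * dt
    # the spring force feeds back into the sensed acceleration as an artifact
    peak_artifact = max(peak_artifact, abs(k * x / m))
```

    In this toy setup the artifact is comparable in magnitude to the impact itself, which is why isolating the sensors from mandible loads matters.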

  7. Oxygen Desaturation Index Estimation through Unconstrained Cardiac Sympathetic Activity Assessment Using Three Ballistocardiographic Systems.

    PubMed

    Jung, Da Woon; Hwang, Su Hwan; Lee, Yu Jin; Jeong, Do-Un; Park, Kwang Suk

    2016-01-01

    Nocturnal hypoxemia, characterized by abnormally low oxygen saturation levels in arterial blood during sleep, is a significant feature of various pathological conditions. The oxygen desaturation index, commonly used to evaluate the nocturnal hypoxemia severity, is acquired using nocturnal pulse oximetry that requires the overnight wear of a pulse oximeter probe. This study aimed to suggest a method for the unconstrained estimation of the oxygen desaturation index. We hypothesized that the severity of nocturnal hypoxemia would be positively associated with cardiac sympathetic activation during sleep. Unconstrained heart rate variability monitoring was conducted using three different ballistocardiographic systems to assess cardiac sympathetic activity. Overnight polysomnographic and ballistocardiographic recording pairs were collected from the 20 non-nocturnal hypoxemia (oxygen desaturation index <5 events/h) subjects and the 76 nocturnal hypoxemia patients. Among the 96 recording pairs, 48 were used as training data and the remaining 48 as test data. The regression analysis, performed using the low-frequency component of heart rate variability, exhibited a root mean square error of 3.33 events/h between the estimates and the reference values of the oxygen desaturation index. The nocturnal hypoxemia diagnostic performance produced by our method was presented with an average accuracy of 96.5% at oxygen desaturation index cutoffs of ≥5, 15, and 30 events/h. Our method has the potential to serve as a complementary measure against the accidental slip-out of a pulse oximeter probe during nocturnal pulse oximetry. The independent application of our method could facilitate home-based long-term oxygen desaturation index monitoring. © 2016 S. Karger AG, Basel.

  8. An apparent contradiction: increasing variability to achieve greater precision?

    PubMed

    Rosenblatt, Noah J; Hurt, Christopher P; Latash, Mark L; Grabiner, Mark D

    2014-02-01

    To understand the relationship between variability of foot placement in the frontal plane and stability of gait patterns, we explored how constraining mediolateral foot placement during walking affects the structure of kinematic variance in the lower-limb configuration space during the swing phase of gait. Ten young subjects walked under three conditions: (1) unconstrained (normal walking), (2) constrained (walking overground with visual guides for foot placement to achieve the measured unconstrained step width) and, (3) beam (walking on elevated beams spaced to achieve the measured unconstrained step width). The uncontrolled manifold analysis of the joint configuration variance was used to quantify two variance components, one that did not affect the mediolateral trajectory of the foot in the frontal plane ("good variance") and one that affected this trajectory ("bad variance"). Based on recent studies, we hypothesized that across conditions (1) the index of the synergy stabilizing the mediolateral trajectory of the foot (the normalized difference between the "good variance" and "bad variance") would systematically increase and (2) the changes in the synergy index would be associated with a disproportionate increase in the "good variance." Both hypotheses were confirmed. We conclude that an increase in the "good variance" component of the joint configuration variance may be an effective method of ensuring high stability of gait patterns during conditions requiring increased control of foot placement, particularly if a postural threat is present. Ultimately, designing interventions that encourage a larger amount of "good variance" may be a promising method of improving stability of gait patterns in populations such as older adults and neurological patients.
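
    The variance partition described above can be sketched numerically: project joint-configuration deviations onto the null space of the task Jacobian ("good variance") and onto its orthogonal complement ("bad variance"), then form the normalized synergy index. The Jacobian and data below are toy stand-ins, not the paper's gait model.

```python
import numpy as np

def ucm_variance(joint_deviations, J):
    """Partition joint-space variance relative to the Jacobian J that maps
    joint deviations to the task variable (here, mediolateral foot position).

    joint_deviations: (n_trials, n_joints) deviations from the mean config
    J: (n_task, n_joints) task Jacobian
    """
    n_joints = J.shape[1]
    # Orthonormal basis of the null space (the uncontrolled manifold) of J
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > 1e-12))
    ucm_basis = Vt[rank:].T            # directions that do NOT move the foot
    ort_basis = Vt[:rank].T            # directions that DO move the foot

    dev_ucm = joint_deviations @ ucm_basis
    dev_ort = joint_deviations @ ort_basis

    # Variance per dimension, as in standard UCM analysis
    n = len(joint_deviations)
    v_good = np.sum(dev_ucm ** 2) / (dev_ucm.shape[1] * n)
    v_bad = np.sum(dev_ort ** 2) / (dev_ort.shape[1] * n)
    # Synergy index: normalized difference between the two components
    total_per_dof = (v_good * dev_ucm.shape[1] + v_bad * dev_ort.shape[1]) / n_joints
    return v_good, v_bad, (v_good - v_bad) / total_per_dof

# Toy example: 3 joints, 1D task; variance deliberately placed in the null space
rng = np.random.default_rng(0)
J = np.array([[1.0, 1.0, 1.0]])        # hypothetical task Jacobian
devs = rng.normal(size=(200, 3))
devs -= (devs @ J.T) @ J / np.sum(J ** 2) * 0.9   # shrink task-relevant part
v_good, v_bad, dv = ucm_variance(devs, J)
```

    A positive synergy index indicates that most joint variance is channeled into directions that leave the foot trajectory unchanged, which is the pattern the study reports.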

  9. Mobile EEG on the bike: disentangling attentional and physical contributions to auditory attention tasks

    NASA Astrophysics Data System (ADS)

    Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten

    2016-08-01

    Objective. In the past few years there has been a growing interest in studying brain functioning in natural, real-life situations. Mobile EEG makes it possible to study the brain in real, unconstrained environments, but it faces the intrinsic challenge that observed changes in brain activity cannot be unambiguously attributed either to the increased cognitive demands of the complex natural environment or to the physical involvement. In this work we aim to disentangle the influence of cognitive demands and distractions that arise from such outdoor unconstrained recordings. Approach. We evaluate the ERP and single-trial characteristics of a three-class auditory oddball paradigm recorded in outdoor scenarios while pedaling on a fixed bike or biking freely around. In addition we also carefully evaluate the trial-specific motion artifacts through independent gyro measurements and control for muscle artifacts. Main results. A decrease in P300 amplitude was observed in the free biking condition as compared to the fixed bike conditions. Above-chance P300 single-trial classification in highly dynamic real-life environments while biking outdoors was achieved. Certain significant artifact patterns were identified in the free biking condition, but neither these nor the increase in movement (as derived from continuous gyrometer measurements) can fully explain the differences in classification accuracy and P300 waveform. The increased cognitive load in real-life scenarios is shown to play a major role in the observed differences. Significance. Our findings suggest that auditory oddball results measured in natural real-life scenarios are influenced mainly by increased cognitive load due to being in an unconstrained environment.

  10. Mobile EEG on the bike: disentangling attentional and physical contributions to auditory attention tasks.

    PubMed

    Zink, Rob; Hunyadi, Borbála; Huffel, Sabine Van; Vos, Maarten De

    2016-08-01

    In the past few years there has been a growing interest in studying brain functioning in natural, real-life situations. Mobile EEG makes it possible to study the brain in real, unconstrained environments, but it faces the intrinsic challenge that observed changes in brain activity cannot be unambiguously attributed either to the increased cognitive demands of the complex natural environment or to the physical involvement. In this work we aim to disentangle the influence of cognitive demands and distractions that arise from such outdoor unconstrained recordings. We evaluate the ERP and single-trial characteristics of a three-class auditory oddball paradigm recorded in outdoor scenarios while pedaling on a fixed bike or biking freely around. In addition we also carefully evaluate the trial-specific motion artifacts through independent gyro measurements and control for muscle artifacts. A decrease in P300 amplitude was observed in the free biking condition as compared to the fixed bike conditions. Above-chance P300 single-trial classification in highly dynamic real-life environments while biking outdoors was achieved. Certain significant artifact patterns were identified in the free biking condition, but neither these nor the increase in movement (as derived from continuous gyrometer measurements) can fully explain the differences in classification accuracy and P300 waveform. The increased cognitive load in real-life scenarios is shown to play a major role in the observed differences. Our findings suggest that auditory oddball results measured in natural real-life scenarios are influenced mainly by increased cognitive load due to being in an unconstrained environment.

  11. Mind your step: metabolic energy cost while walking an enforced gait pattern.

    PubMed

    Wezenberg, D; de Haan, A; van Bennekom, C A M; Houdijk, H

    2011-04-01

    The energy cost of walking can be attributed to energy related to the walking movement and energy related to balance control. In order to differentiate between both components we investigated the energy cost of walking an enforced step pattern, thereby perturbing balance while the walking movement is preserved. Nine healthy subjects walked three times at comfortable walking speed on an instrumented treadmill. The first trial consisted of unconstrained walking. In the next two trials, subjects walked while following a step pattern projected on the treadmill. The steps projected were either composed of the averaged step characteristics (periodic trial), or were an exact copy, including the variability, of the steps taken while walking unconstrained (variable trial). Metabolic energy cost was assessed, and center of pressure profiles were analyzed to determine task performance and to gain insight into the balance control strategies applied. Results showed that the metabolic energy cost was significantly higher in both the periodic and variable trials (8% and 13%, respectively) compared to unconstrained walking. The variation in center of pressure trajectories during single limb support was higher when a gait pattern was enforced, indicating a more active ankle strategy. The increased metabolic energy cost could originate from increased preparatory muscle activation to ensure proper foot placement and from a more active ankle strategy to control lateral balance. These results imply that the metabolic energy cost of walking can be influenced significantly by control strategies that do not necessarily alter global gait characteristics. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Use of multilevel modeling for determining optimal parameters of heat supply systems

    NASA Astrophysics Data System (ADS)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding optimal parameters of a heat-supply system (HSS) lies in ensuring the required throughput capacity of a heat network by determining pipeline diameters and characteristics and location of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems owing to the flexible adaptability of the calculation procedure to the intricate nonlinear mathematical models that describe the features of the equipment used and the methods of its construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; and the SOSNA software system for determining optimum parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems.
The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in St. Petersburg, the city of Bratsk, and the Magistral'nyi settlement.

  13. Load balancing and closed chain multiple arm control

    NASA Technical Reports Server (NTRS)

    Kreutz, Kenneth; Lokshin, Anatole

    1988-01-01

    The authors give the general dynamical equations for several rigid link manipulators rigidly grasping a commonly held rigid object. It is shown that the number of arm-configuration degrees of freedom lost due to imposing the closed-loop kinematic constraints is the same as the number of degrees of freedom gained for controlling the internal forces of the closed-chain system. This number is equal to the dimension of the kernel of the Jacobian operator which transforms contact forces to the net forces acting on the held object, and it is shown that this kernel can be identified with the subspace of controllable internal forces of the closed-chain system. Control of these forces makes it possible to regulate the grasping forces imparted to the held object or to control the load taken by each arm. It is shown that the internal forces can be influenced without affecting the control of the configuration degrees of freedom. Control laws of the feedback linearization type are shown to be useful for controlling the location and attitude of a frame fixed with respect to the held object, while simultaneously controlling the internal forces of the closed-chain system. Force feedback can be used to linearize and control the system even when the held object has unknown mass properties. If saturation effects are ignored, an unconstrained quadratic optimization can be performed to distribute the load optimally among the joint actuators.
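
    The load-distribution idea in the last sentence can be illustrated with a minimal sketch: for a grasp map G, the minimum-norm contact forces solving the unconstrained quadratic problem are given by the pseudoinverse, and the kernel of G parameterizes internal forces that leave the net force unchanged. The two-contact planar grasp map below is a made-up example, not the paper's manipulator model.

```python
import numpy as np

# Hypothetical planar example: two arms grasping one object.
# G maps the stacked contact forces (4,) to the net force on the object (2,).
G = np.hstack([np.eye(2), np.eye(2)])   # both contacts push the same object

F_net = np.array([0.0, -9.81 * 2.0])    # required net force (e.g. gravity load)

# Minimum-norm contact forces: the unconstrained quadratic optimum of ||f||^2
# subject to G f = F_net, given in closed form by the pseudoinverse.
f_min = np.linalg.pinv(G) @ F_net

# The kernel (null space) of G parameterizes internal forces: adding any
# kernel vector changes the load sharing but not the net force on the object.
_, s, Vt = np.linalg.svd(G)
N = Vt[2:].T                            # basis of the 2-dimensional kernel
f_internal = f_min + N @ np.array([3.0, 0.0])
```

    The minimum-norm solution splits the load equally between the two arms; moving along the kernel redistributes it (e.g. to squeeze the object harder) without disturbing the object's motion control, which is exactly the decoupling the abstract describes.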

  14. Interconnection-wide hour-ahead scheduling in the presence of intermittent renewables and demand response: A surplus maximizing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned

    This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus satisfying transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, both the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B, while increasing the annual surplus by 9.32B in comparison to the standalone case.
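
    A toy two-area version of the dispatch idea can be written down directly: with linear marginal-cost curves and inelastic loads (all numbers invented), the unconstrained surplus-maximizing transfer equalizes marginal cost across areas, and the pooled production cost never exceeds the standalone cost.

```python
import numpy as np

# Invented two-area illustration: each area has a linear marginal-cost curve
# MC_i(q) = a_i + b_i * q and an inelastic hourly load d_i. With inelastic
# demand, maximizing surplus is the same as minimizing production cost.
a = np.array([10.0, 30.0])    # marginal-cost intercepts
b = np.array([0.10, 0.05])    # marginal-cost slopes
d = np.array([400.0, 300.0])  # local loads, MW

def area_cost(q):             # integral of the marginal-cost curve
    return a * q + 0.5 * b * q ** 2

standalone_cost = area_cost(d).sum()

# Surplus-maximizing unconstrained dispatch: equalize marginal cost,
# a0 + b0*q0 = a1 + b1*q1, subject to q0 + q1 = total load.
total = d.sum()
q0 = (a[1] - a[0] + b[1] * total) / (b[0] + b[1])
q = np.array([q0, total - q0])
pooled_cost = area_cost(q).sum()
export = q[0] - d[0]          # area 0's export (negative = import)
```

    The uniform marginal cost plays the role of the uniform interconnection price in the abstract; transmission limits would cap `export` and leave the constrained areas at different prices.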

  15. A hybrid optimization algorithm to explore atomic configurations of TiO2 nanoparticles

    DOE PAGES

    Inclan, Eric J.; Geohegan, David B.; Yoon, Mina

    2017-10-17

    Here we present a hybrid algorithm comprising differential evolution coupled with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton optimization algorithm, for the purpose of identifying a broad range of (meta)stable TinO2n nanoparticles, as an example system, described by a Buckingham interatomic potential. The potential and its gradient are modified to be piecewise continuous to enable use of these continuous-domain, unconstrained algorithms, thereby improving compatibility. To measure computational effectiveness, a regression on known structures is used. This approach defines effectiveness as the ability of an algorithm to produce a set of structures whose energy distribution follows the regression as the number of TinO2n units increases, such that the shape of the distribution is consistent with the algorithm's stated goals. Our calculation demonstrates that the hybrid algorithm finds global minimum configurations more effectively than the differential evolution algorithms widely employed in the field of materials science. Specifically, the hybrid algorithm is shown to reproduce the global minimum energy structures reported in the literature up to n = 5, and retains good agreement with the regression up to n = 25. For 25 < n < 100, where literature structures are unavailable, the hybrid effectively obtains structures at lower energies per TiO2 unit as the system size increases.
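
    The hybrid scheme can be sketched as a standard DE/rand/1/bin loop followed by a local polish of the best candidate. The sketch below substitutes a toy multimodal test function for the Buckingham potential and plain gradient descent for BFGS; only the overall global-then-local structure is taken from the abstract.

```python
import numpy as np

def f(x):  # toy multimodal stand-in for the interatomic potential
    return np.sum(x ** 2) + 2 * np.sum(1 - np.cos(2 * np.pi * x))

def grad(x, h=1e-6):  # numerical gradient (the paper smooths the potential
    g = np.zeros_like(x)  # precisely so that gradients are usable)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hybrid_optimize(dim=2, pop=20, gens=200, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(pop, dim))
    for _ in range(gens):                      # simplified DE/rand/1/bin loop
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            trial = np.where(rng.random(dim) < 0.9, a + 0.8 * (b - c), X[i])
            if f(trial) < f(X[i]):             # greedy selection
                X[i] = trial
    best = min(X, key=f)
    for _ in range(2000):                      # local polish of the DE winner
        best = best - 0.002 * grad(best)       # (gradient descent stands in
    return best                                #  for the BFGS step)

x_star = hybrid_optimize()
```

    The global stage explores the basins; the local stage drives the best candidate to the bottom of its basin, which is the division of labor the paper exploits.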

  16. Prediction of human gait trajectories during the SSP using a neuromusculoskeletal modeling: A challenge for parametric optimization.

    PubMed

    Seyed, Mohammadali Rahmati; Mostafa, Rostami; Borhan, Beigzadeh

    2018-04-27

    Parametric optimization techniques have been widely employed to predict human gait trajectories; however, their ability to reveal other aspects of gait remains questionable. The aim of this study is to investigate whether or not the gait prediction model is able to justify the movement trajectories for higher average velocities. A planar, seven-segment model with sixteen muscle groups was used to represent human neuro-musculoskeletal dynamics. At first, the joint angles, ground reaction forces (GRFs) and muscle activations were predicted and validated for a normal average velocity (1.55 m/s) in the single support phase (SSP) by minimizing energy expenditure, subject to the non-linear constraints of the gait. The unconstrained system dynamics of extended inverse dynamics (USDEID) approach was used to estimate muscle activations. Then, by scaling time and applying the same procedure, the movement trajectories were predicted for higher average velocities (from 2.07 m/s to 4.07 m/s) and compared to the pattern of movement with fast walking speed. The comparison indicated a high level of compatibility between the experimental and predicted results, except for the vertical position of the center of gravity (COG). It was concluded that the gait prediction model can be effectively used to predict gait trajectories for higher average velocities.

  17. Constrained Versions of DEDICOM for Use in Unsupervised Part-Of-Speech Tagging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel; Chew, Peter A.

    This report describes extensions of DEDICOM (DEcomposition into DIrectional COMponents) data models [3] that incorporate bound and linear constraints. The main purpose of these extensions is to investigate the use of improved data models for unsupervised part-of-speech tagging, as described by Chew et al. [2]. In that work, a single-domain, two-way DEDICOM model was computed on a matrix of bigram frequencies of tokens in a corpus and used to identify parts of speech as an unsupervised approach to that problem. An open problem identified in that work was the computation of a DEDICOM model that more closely resembled the matrices used in a hidden Markov model (HMM), specifically through post-processing of the DEDICOM factor matrices. The work reported here consists of the description of several models that aim to provide a direct solution to that problem and a way to fit those models. The approach taken here is to incorporate the model requirements as bound and linear constraints into the DEDICOM model directly and to solve the data-fitting problem as a constrained optimization problem. This is in contrast to the typical approaches in the literature, where the DEDICOM model is fit using unconstrained optimization approaches and model requirements are satisfied as a post-processing step.
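
    A minimal unconstrained two-way DEDICOM fit (the baseline the report contrasts against) can be sketched as alternating updates of X ≈ A R Aᵀ: a least-squares update of R and a gradient step on A. The synthetic data and step size below are illustrative assumptions, not the report's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 2
A_true = rng.random((n, k))
R_true = np.array([[1.0, 0.5], [0.1, 1.0]])  # asymmetric R: directional components
X = A_true @ R_true @ A_true.T               # noiseless synthetic "bigram" matrix

A = rng.random((n, k))
R = rng.random((k, k))
err0 = np.linalg.norm(A @ R @ A.T - X) / np.linalg.norm(X)

lr = 0.005
for _ in range(4000):
    # Optimal R for the current A (linear least squares via the pseudoinverse)
    R = np.linalg.pinv(A) @ X @ np.linalg.pinv(A).T
    E = A @ R @ A.T - X
    # Gradient of 0.5 * ||E||_F^2 with respect to A
    A -= lr * (E @ A @ R.T + E.T @ A @ R)

err = np.linalg.norm(A @ R @ A.T - X) / np.linalg.norm(X)
```

    The report's constrained variants would add bounds and linear constraints to this fitting problem (e.g. so that A and R resemble HMM matrices) instead of post-processing the unconstrained factors.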

  18. Using Grey Wolf Algorithm to Solve the Capacitated Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Korayem, L.; Khorsid, M.; Kassem, S. S.

    2015-05-01

    The capacitated vehicle routing problem (CVRP) is a class of the vehicle routing problems (VRPs). In CVRP, a set of identical vehicles having fixed capacities is required to fulfill customers' demands for a single commodity. The main objective is to minimize the total cost or distance traveled by the vehicles while satisfying a number of constraints, such as the capacity constraint of each vehicle, logical flow constraints, etc. One of the methods employed in solving the CVRP is the cluster-first route-second method. It is a technique based on grouping customers into a number of clusters, where each cluster is served by one vehicle. Once clusters are formed, a route determining the best sequence to visit customers is established within each cluster. The bio-inspired grey wolf optimizer (GWO), introduced in 2014, has proven to be efficient in solving unconstrained, as well as constrained, optimization problems. In the current research, our main contributions are: combining GWO with the traditional K-means clustering algorithm to generate the 'K-GWO' algorithm, deriving a capacitated version of the K-GWO algorithm by incorporating a capacity constraint into the aforementioned algorithm, and finally, developing 2 new clustering heuristics. The resulting algorithm is used in the clustering phase of the cluster-first route-second method to solve the CVRP. The algorithm is tested on a number of benchmark problems with encouraging results.
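
    The cluster-first route-second pipeline can be sketched with plain iterative centroids standing in for the GWO-tuned clustering (the K-GWO step itself is not reproduced here): assign each customer to the nearest centroid that still has capacity, update centroids, then build a nearest-neighbour route per cluster. All coordinates and demands are made up.

```python
import numpy as np

def cluster_first_route_second(customers, demands, depot, k, capacity, seed=0):
    """Toy cluster-first route-second heuristic with a capacity-aware
    k-means-style clustering phase and a nearest-neighbour routing phase."""
    rng = np.random.default_rng(seed)
    centroids = customers[rng.choice(len(customers), k, replace=False)]
    labels = np.full(len(customers), -1)
    for _ in range(20):
        loads = np.zeros(k)
        labels = np.full(len(customers), -1)
        for i in np.argsort(rng.random(len(customers))):   # random order
            dist = np.linalg.norm(centroids - customers[i], axis=1)
            for c in np.argsort(dist):                     # nearest feasible
                if loads[c] + demands[i] <= capacity:      # capacity constraint
                    labels[i] = c
                    loads[c] += demands[i]
                    break
        for c in range(k):                                 # centroid update
            if np.any(labels == c):
                centroids[c] = customers[labels == c].mean(axis=0)
    routes = []
    for c in range(k):                                     # route each cluster
        todo = list(np.where(labels == c)[0])
        route, pos = [], depot
        while todo:
            nxt = min(todo, key=lambda j: np.linalg.norm(customers[j] - pos))
            route.append(nxt); pos = customers[nxt]; todo.remove(nxt)
        routes.append(route)
    return labels, routes

customers = np.array([[0, 1], [0, 2], [5, 1], [5, 2], [0, 3], [5, 3]], dtype=float)
demands = np.ones(6)
labels, routes = cluster_first_route_second(customers, demands,
                                            depot=np.array([2.5, 0.0]),
                                            k=2, capacity=3)
```

    In the paper, the centroid-placement step is driven by GWO instead of the simple mean update used here; the routing phase is unchanged in spirit.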

  19. Interconnection-wide hour-ahead scheduling in the presence of intermittent renewables and demand response: A surplus maximizing approach

    DOE PAGES

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned; ...

    2016-12-23

    This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus satisfying transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, both the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B, while increasing the annual surplus by 9.32B in comparison to the standalone case.

  20. Dynamic Modeling, Model-Based Control, and Optimization of Solid Oxide Fuel Cells

    NASA Astrophysics Data System (ADS)

    Spivey, Benjamin James

    2011-07-01

    Solid oxide fuel cells are a promising option for distributed stationary power generation that offers efficiencies ranging from 50% in stand-alone applications to greater than 80% in cogeneration. To advance SOFC technology for widespread market penetration, the SOFC should demonstrate improved cell lifetime and load-following capability. This work seeks to improve lifetime through dynamic analysis of critical lifetime variables and advanced control algorithms that permit load-following while remaining in a safe operating zone based on stress analysis. Control algorithms typically have addressed SOFC lifetime operability objectives using unconstrained, single-input-single-output control algorithms that minimize thermal transients. Existing SOFC controls research has not considered maximum radial thermal gradients or limits on absolute temperatures in the SOFC. In particular, as stress analysis demonstrates, the minimum cell temperature is the primary thermal stress driver in tubular SOFCs. This dissertation presents a dynamic, quasi-two-dimensional model for a high-temperature tubular SOFC combined with ejector and prereformer models. The model captures dynamics of critical thermal stress drivers and is used as the physical plant for closed-loop control simulations. A constrained, MIMO model predictive control algorithm is developed and applied to control the SOFC. Closed-loop control simulation results demonstrate effective load-following, constraint satisfaction for critical lifetime variables, and disturbance rejection. Nonlinear programming is applied to find the optimal SOFC size and steady-state operating conditions to minimize total system costs.
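
    The constrained MIMO model predictive controller itself is beyond a short sketch, but the receding-horizon idea with temperature and rate-of-change constraints can be illustrated on a scalar surrogate model. The dynamics, limits, and weights below are invented and far simpler than the quasi-two-dimensional SOFC model in the dissertation; exhaustive search over a small input grid stands in for the QP solver.

```python
import numpy as np
from itertools import product

# Invented scalar surrogate "stack temperature" model: T[k+1] = 0.9*T[k] + 0.1*u[k]
A_, B_ = 0.9, 0.1
U = np.linspace(0.0, 100.0, 21)       # admissible input levels
T_MAX, DT_MAX = 95.0, 3.0             # absolute and per-step temperature limits

def mpc_step(T0, setpoint, horizon=3):
    """One receding-horizon step: search all input sequences over the horizon,
    keep only those satisfying the constraints, return the first input of the
    cheapest feasible sequence (a brute-force stand-in for the MPC QP)."""
    best_u, best_cost = None, np.inf
    for seq in product(U, repeat=horizon):
        T, cost, feasible = T0, 0.0, True
        for u in seq:
            T_next = A_ * T + B_ * u
            if T_next > T_MAX or abs(T_next - T) > DT_MAX:  # hard constraints
                feasible = False; break
            cost += (T_next - setpoint) ** 2 + 1e-4 * u ** 2
            T = T_next
        if feasible and cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

T, history = 80.0, []
for _ in range(30):                   # closed loop: apply first input, repeat
    u = mpc_step(T, setpoint=90.0)
    T = A_ * T + B_ * u
    history.append(T)
```

    The closed loop tracks the setpoint (load-following) while never violating the absolute or per-step temperature limits, which mirrors the constraint-satisfaction behavior reported for the SOFC controller.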

  1. On process optimization considering LCA methodology.

    PubMed

    Pieragostini, Carla; Mussati, Miguel C; Aguirre, Pío

    2012-04-15

    The goal of this work is to review the state of the art in process optimization techniques and tools based on LCA, focused on the process engineering field. A collection of methods, approaches, applications, specific software packages, and insights regarding experiences and progress made in applying the LCA methodology coupled to optimization frameworks is provided, and general trends are identified. The "cradle-to-gate" concept to define the system boundaries is the most used approach in practice, instead of the "cradle-to-grave" approach. Normally, the relationship between inventory data and impact category indicators is linearly expressed by the characterization factors; then, synergic effects of the contaminants are neglected. Among the LCIA methods, the eco-indicator 99, which is based on the endpoint category and the panel method, is the most used in practice. A single environmental impact function, resulting from the aggregation of environmental impacts, is formulated as the environmental objective in most analyzed cases. SimaPro is the most used software for LCA applications in the literature analyzed. Multi-objective optimization is the most used approach for dealing with this kind of problem, where the ε-constraint method for generating the Pareto set is the most applied technique. However, a renewed interest in formulating a single economic objective function in optimization frameworks can be observed, favored by the development of life cycle cost software and progress made in assessing costs of environmental externalities. Finally, a trend to deal with multi-period scenarios in integrated LCA-optimization frameworks can be distinguished, providing more accurate results upon data availability. Copyright © 2011 Elsevier Ltd. All rights reserved.
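
    The ε-constraint technique mentioned above is easy to illustrate: minimize one objective while sweeping an upper bound ε on the other, collecting one Pareto point per ε. The bi-objective toy problem below (cost vs. environmental impact of a single design variable) is invented for illustration.

```python
import numpy as np

# Invented bi-objective toy problem: f1 = operating cost, f2 = environmental
# impact, both functions of a single design variable x solved by grid search.
x = np.linspace(0.0, 1.0, 1001)
f1 = (x - 1.0) ** 2          # cost falls as x -> 1
f2 = x ** 2                  # impact rises with x

pareto = []
for eps in np.linspace(0.0, 1.0, 11):
    feasible = f2 <= eps                    # epsilon-constraint on the impact
    if feasible.any():
        i = np.argmin(np.where(feasible, f1, np.inf))  # min cost s.t. f2 <= eps
        pareto.append((f1[i], f2[i]))

costs = [p[0] for p in pareto]
```

    Tightening ε trades cost for impact; sweeping it traces the Pareto front, which is exactly how the surveyed LCA-optimization studies generate their trade-off curves.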

  2. Multidisciplinary Optimization Branch Experience Using iSIGHT Software

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Korte, J. J.; Dunn, H. J.; Salas, A. O.

    1999-01-01

    The Multidisciplinary Optimization (MDO) Branch at NASA Langley Research Center is investigating frameworks for supporting multidisciplinary analysis and optimization research. An optimization framework can improve the design process while reducing time and costs. A framework provides software and system services to integrate computational tasks and allows the researcher to concentrate more on the application and less on the programming details. A framework also provides a common working environment and a full range of optimization tools, and so increases the productivity of multidisciplinary research teams. Finally, a framework enables staff members to develop applications for use by disciplinary experts in other organizations. Since the release of version 4.0, the MDO Branch has gained experience with the iSIGHT framework developed by Engineous Software, Inc. This paper describes experiences with four aerospace applications: (1) reusable launch vehicle sizing, (2) aerospike nozzle design, (3) low-noise rotorcraft trajectories, and (4) acoustic liner design. All applications have been successfully tested using the iSIGHT framework, except for the aerospike nozzle problem, which is in progress. Brief overviews of each problem are provided. The problem descriptions include the number and type of disciplinary codes, as well as an estimate of the multidisciplinary analysis execution time. In addition, the optimization methods, objective functions, design variables, and design constraints are described for each problem. Discussions on the experience gained and lessons learned are provided for each problem. These discussions include the advantages and disadvantages of using the iSIGHT framework for each case as well as the ease of use of various advanced features. Potential areas of improvement are identified.

  3. In Praise of Ignorance

    ERIC Educational Resources Information Center

    Formica, Piero

    2014-01-01

    In this article Piero Formica examines the difference between incremental and revolutionary innovation, distinguishing between the constrained "path finders" and the unconstrained "path creators". He argues that an acceptance of "ignorance" and a willingness to venture into the unknown are critical elements in…

  4. A proportional control scheme for high density force myography.

    PubMed

    Belyea, Alexander T; Englehart, Kevin B; Scheme, Erik J

    2018-08-01

    Force myography (FMG) has been shown to be a potentially higher-accuracy alternative to electromyography for pattern-recognition-based prosthetic control. Classification accuracy, however, is just one factor that affects the usability of a control system. Others, such as the ability to start and stop, to coordinate dynamic movements, and to control the velocity of the device through some proportional control scheme, can be of equal importance. To impart effective fine control using FMG-based pattern recognition, it is important that a method of controlling the velocity of each motion be developed. In this work, force myography data were collected from 14 able-bodied participants and one amputee participant as they performed a set of wrist and hand motions. The offline proportional control performance of a standard mean signal amplitude approach and a proposed regression-based alternative was compared. The impact of providing feedback during training, as well as the use of constrained or unconstrained hand and wrist contractions, was also evaluated. It is shown that the mean of rectified channel amplitudes approach commonly employed with electromyography does not translate to force myography. The proposed class-based regression proportional control approach is shown to significantly outperform this standard approach (p < 0.001), yielding R² values of 0.837 and 0.830 for constrained and unconstrained forearm contractions, respectively, for able-bodied participants. No significant difference (p = 0.693) was found in R² performance whether or not feedback was provided during training. The amputee subject achieved a classification accuracy of 83.4% ± 3.47%, demonstrating the ability to distinguish contractions well with FMG. In proportional control, the amputee participant achieved an R² of 0.375 for regression-based proportional control during unconstrained contractions. This is lower than the unconstrained result for able-bodied subjects, possibly due to difficulty in visualizing contraction-level modulation without feedback. This may be remedied by the use of a prosthetic limb that provides real-time feedback in the form of device speed. A novel class-specific regression-based approach for multi-class control is described and shown to provide an effective means of FMG-based proportional control.
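    The class-specific regression idea can be sketched as follows. This is a hypothetical minimal version, assuming one linear least-squares regressor per motion class mapping FMG channel features to a normalized contraction level; the paper's exact feature set and regression form are not specified here.

    ```python
    import numpy as np

    def train_class_regressors(features, levels, labels):
        """Fit one least-squares regressor per motion class.

        features: (n_samples, n_channels) FMG feature matrix
        levels:   (n_samples,) target contraction level (e.g. 0..1)
        labels:   (n_samples,) integer motion-class label
        Returns a dict mapping class -> weight vector (with bias term).
        """
        regressors = {}
        for c in np.unique(labels):
            X = features[labels == c]
            X = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
            w, *_ = np.linalg.lstsq(X, levels[labels == c], rcond=None)
            regressors[c] = w
        return regressors

    def predict_level(regressors, x, predicted_class):
        """Proportional-control output for one frame, given the classifier's label."""
        w = regressors[predicted_class]
        return float(np.append(x, 1.0) @ w)
    ```

    At run time the pattern-recognition classifier selects the motion class, and that class's regressor converts the same FMG frame into a velocity command.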

  5. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 1

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    Control/Structures Integration program software needs, computer aided control engineering for flexible spacecraft, computer aided design, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software for flexible structures and robots are among the topics discussed.

  6. Advances in Strapdown Inertial Systems. Lecture Series Held in Athens, Greece on 14-15 May 1984, in Rome, Italy on 17-18 May 1984 and in Copenhagen, Denmark on 21-22 May 1984

    DTIC Science & Technology

    1984-04-01

    software are required. Ported air cooling is provided in accordance with WKIM 600 Level 2 and adequately supports the power dissipation (approximately 100... software multiplication with simple shifting operations in order to optimize operating speed. Finally, program development software for microprocessors...requirements and that the software was exhaustively verified and validated prior to initiation of flight testing will be described. A special flight

  7. Grayscale Optical Correlator Workbench

    NASA Technical Reports Server (NTRS)

    Hanan, Jay; Zhou, Hanying; Chao, Tien-Hsin

    2006-01-01

    Grayscale Optical Correlator Workbench (GOCWB) is a computer program for use in automatic target recognition (ATR). GOCWB performs ATR with an accurate simulation of a hardware grayscale optical correlator (GOC). This simulation is performed to test filters that are created in GOCWB. Thus, GOCWB can be used as a stand-alone ATR software tool or in combination with GOC hardware for building (target training), testing, and optimization of filters. The software is divided into three main parts, denoted filter, testing, and training. The training part is used for assembling training images as input to a filter. The filter part is used for combining training images into a filter and optimizing that filter. The testing part is used for testing new filters and for general simulation of GOC output. The current version of GOCWB relies on the mathematical software tools from MATLAB binaries for performing matrix operations and fast Fourier transforms. Optimization of filters is based on an algorithm, known as OT-MACH, in which variables specified by the user are parameterized and the best filter is selected on the basis of an average result for correct identification of targets in multiple test images.
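    The OT-MACH composition step can be sketched in the frequency domain. This is a generic textbook form of the filter (mean training spectrum divided by a weighted sum of noise, average power-spectral-density, and similarity terms), not GOCWB's actual implementation; the default weights are illustrative only.

    ```python
    import numpy as np

    def ot_mach_filter(train_imgs, alpha=0.01, beta=0.5, gamma=0.5):
        """Compose an OT-MACH correlation filter in the frequency domain.

        train_imgs: (N, H, W) stack of registered training images.
        The filter trades off noise tolerance (alpha, white-noise model C = I),
        average correlation energy (beta, D), and similarity to the class
        mean (gamma, S), all treated as element-wise diagonal spectra.
        """
        X = np.fft.fft2(train_imgs, axes=(-2, -1))   # training spectra
        M = X.mean(axis=0)                           # mean spectrum m
        D = (np.abs(X) ** 2).mean(axis=0)            # average power spectral density
        S = (np.abs(X - M) ** 2).mean(axis=0)        # spectral variance (similarity)
        C = np.ones_like(D)                          # white-noise covariance
        return M / (alpha * C + beta * D + gamma * S)

    def correlate(img, H):
        """Correlation-plane response of one test image against the filter."""
        return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(H)))
    ```

    A target match appears as a sharp peak in the correlation plane; the parameter sweep GOCWB performs amounts to scanning (alpha, beta, gamma) and scoring recognition over the test set.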

  8. High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software

    PubMed Central

    Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo

    2014-01-01

    To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed-model-based tests. When large samples are used, and when multiple traits are to be studied in the ’omics’ context, this approach becomes computationally challenging. Here we consider the problem of mixed-model-based GWAS for an arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
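    As a rough illustration of why a single matrix decomposition makes large per-SNP scans cheap, here is a generic EMMA/FaST-LMM-style two-step mixed-model scan. This is not OmicABEL's code, and the fixed heritability `h2` is a simplifying assumption (real pipelines estimate the variance components first).

    ```python
    import numpy as np

    def mixed_model_gwas(y, snps, K, h2):
        """Score SNP effects under y = x*b + g + e, cov(g) = h2*K, cov(e) = (1-h2)*I.

        Eigendecompose the kinship K once, whiten the phenotype and all
        genotypes in the eigenbasis, then run per-SNP ordinary least squares
        on the decorrelated data.
        y: (n,), snps: (n, m) genotype matrix, K: (n, n) kinship, h2 in (0, 1).
        Returns per-SNP effect estimates.
        """
        vals, vecs = np.linalg.eigh(K)
        d = h2 * vals + (1.0 - h2)             # variance along each eigenvector
        w = 1.0 / np.sqrt(d)                   # whitening weights
        y_t = w * (vecs.T @ y)                 # rotated, whitened phenotype
        X_t = w[:, None] * (vecs.T @ snps)     # rotated, whitened genotypes
        return (X_t * y_t[:, None]).sum(0) / (X_t ** 2).sum(0)  # per-SNP OLS
    ```

    The decomposition cost is paid once per cohort, after which every SNP (and, in the multi-trait case, every trait) reuses the same rotated data; choosing how to batch those reuses is exactly the single- versus multi-trait algorithm distinction the paper analyzes.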

  9. High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software.

    PubMed

    Fabregat-Traver, Diego; Sharapov, Sodbo Zh; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo

    2014-01-01

    To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed-model-based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model-based GWAS for an arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL.

  10. Advanced texture filtering: a versatile framework for reconstructing multi-dimensional image data on heterogeneous architectures

    NASA Astrophysics Data System (ADS)

    Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich

    2015-01-01

    Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.
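    As a minimal example of the third-order reconstruction such a framework supports, here is 1-D Catmull-Rom filtering over four texels; the actual SoA- and cache-optimized kernels described above are considerably more elaborate, and the clamped border handling here is just one possible address mode.

    ```python
    import numpy as np

    def catmull_rom_1d(samples, x):
        """Third-order (cubic Catmull-Rom) reconstruction of a 1-D texture.

        samples: uniformly spaced texel values; x: continuous coordinate
        in texel units. Uses the four nearest texels, clamped at the border.
        """
        i = int(np.floor(x))
        t = x - i
        # four-tap neighborhood, clamped to the texture border
        idx = np.clip([i - 1, i, i + 1, i + 2], 0, len(samples) - 1)
        p0, p1, p2, p3 = (samples[j] for j in idx)
        # Catmull-Rom basis evaluated at fractional offset t
        return 0.5 * ((2.0 * p1)
                      + (-p0 + p2) * t
                      + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                      + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)
    ```

    Catmull-Rom interpolates through the texel values (unlike the smoothing cubic B-spline), which is why a framework typically offers several cubic variants and lets the renderer choose per data set.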

  11. A Polyhedral Outer-approximation, Dynamic-discretization optimization solver, 1.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Rusell; Nagarajan, Harsha; Sundar, Kaarthik

    2017-09-25

    In this software, we implement an adaptive, multivariate partitioning algorithm for solving mixed-integer nonlinear programs (MINLP) to global optimality. The algorithm combines ideas that exploit the structure of convex relaxations to MINLPs with bound-tightening procedures.
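    A standard building block behind such partitioned convex relaxations is the McCormick envelope for a bilinear term w = x·y over a variable box; subdividing the box and applying the envelope per piece is what tightens the relaxation. A small sketch (illustrative only, not the solver's code):

    ```python
    def mccormick_bounds(xl, xu, yl, yu):
        """McCormick envelope for w = x*y on the box [xl,xu] x [yl,yu].

        Returns (under, over): lists of coefficient tuples (a, b, c), each
        meaning w >= a*x + b*y + c (under) or w <= a*x + b*y + c (over).
        """
        under = [(yl, xl, -xl * yl), (yu, xu, -xu * yu)]   # w >= ...
        over  = [(yl, xu, -xu * yl), (yu, xl, -xl * yu)]   # w <= ...
        return under, over

    def mccormick_interval(x, y, xl, xu, yl, yu):
        """Relaxation interval for w = x*y at a point (x, y) inside the box."""
        under, over = mccormick_bounds(xl, xu, yl, yu)
        lo = max(a * x + b * y + c for a, b, c in under)
        hi = min(a * x + b * y + c for a, b, c in over)
        return lo, hi
    ```

    Shrinking the box (by partitioning or bound tightening) visibly narrows the interval around the true product, which is the mechanism an adaptive partitioning scheme exploits.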

  12. An NAFP Project: Use of Object Oriented Methodologies and Design Patterns to Refactor Software Design

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Baggs, Rhoda

    2007-01-01

    In the early problem-solution era of software programming, functional decompositions were mainly used to design and implement software solutions. In functional decompositions, functions and data are introduced as two separate entities during the design phase, and are followed as such in the implementation phase. Functional decompositions make use of refactoring through optimizing the algorithms, grouping similar functionalities into common reusable functions, and using abstract representations of data where possible; all these are done during the implementation phase. This paper advocates the usage of object-oriented methodologies and design patterns as the centerpieces of refactoring software solutions. Refactoring software is a method of changing software design while explicitly preserving its external functionalities. The combined usage of object-oriented methodologies and design patterns to refactor should also benefit the overall software life cycle cost with improved software.

  13. Automating the design of scientific computing software

    NASA Technical Reports Server (NTRS)

    Kant, Elaine

    1992-01-01

    SINAPSE is a domain-specific software design system that generates code from specifications of equations and algorithm methods. This paper describes the system's design techniques (planning in a space of knowledge-based refinement and optimization rules), user interaction style (user has option to control decision making), and representation of knowledge (rules and objects). It also summarizes how the system knowledge has evolved over time and suggests some issues in building software design systems to facilitate reuse.

  14. Hybrid Optimization Parallel Search PACKage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
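    The GSS family can be illustrated by its simplest serial special case, compass search; the shipped solver generalizes this with constraint handling, caching, and parallel evaluation, none of which appear in this sketch.

    ```python
    import numpy as np

    def gss_minimize(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
        """Derivative-free compass search, the basic Generating Set Search pattern.

        Polls the 2n coordinate directions +/- e_i; accepts the first
        improving point, otherwise halves the step. Terminates when the
        step length drops below tol.
        """
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        it = 0
        while step > tol and it < max_iter:
            it += 1
            improved = False
            for i in range(len(x)):
                for sign in (+1.0, -1.0):
                    trial = x.copy()
                    trial[i] += sign * step      # poll one generating direction
                    ft = f(trial)
                    if ft < fx:                  # accept first improvement
                        x, fx, improved = trial, ft, True
                        break
                if improved:
                    break
            if not improved:
                step *= 0.5                      # contract the pattern
        return x, fx
    ```

    In the parallel framework, all 2n poll points would be submitted as one batch of function evaluations instead of being tried sequentially.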

  15. Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG

    NASA Astrophysics Data System (ADS)

    Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu

    2016-12-01

    Applying a path-searching method based on distribution network topology in setting software has proven effective, and a path-searching method that accounts for DG power sources is likewise applicable to the automatic generation and division of planned islands after a fault. This paper applies a path-searching algorithm to the automatic division of planned islands after faults: starting from the fault-isolation switch and ending at each power source, and according to the line load traversed by the search path and the important loads integrated along the optimized path, an optimized division scheme of planned islands is formed in which each DG serves as a power source balanced against local important loads. Finally, the COBASE software and the distribution network automation software in use are applied to illustrate the effectiveness of this automatic restoration program.
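    The island-division search can be reduced to a breadth-first traversal that grows outward from each DG, never crossing the opened fault switch and never exceeding the DG's capacity. This is an illustrative simplification; the paper's scheme additionally prioritizes important loads along optimized paths, which this sketch omits.

    ```python
    from collections import deque

    def plan_island(adj, loads, dg_node, dg_capacity, fault_node):
        """Grow one planned island around a DG after fault isolation.

        adj:   node -> list of neighboring nodes (distribution network topology)
        loads: node -> load demand at that node
        Breadth-first search outward from the DG, skipping the isolated
        fault node, adding nodes while total served load stays within
        the DG's capacity. Returns the island's node set.
        """
        island, served = {dg_node}, loads[dg_node]
        queue = deque([dg_node])
        while queue:
            node = queue.popleft()
            for nxt in adj[node]:
                if nxt in island or nxt == fault_node:
                    continue                      # already served or isolated
                if served + loads[nxt] <= dg_capacity:
                    island.add(nxt)
                    served += loads[nxt]
                    queue.append(nxt)
        return island
    ```

    Running this once per DG yields a candidate island per source; the real scheme then arbitrates overlaps and weighs important versus ordinary loads.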

  16. A New Wavelength Optimization and Energy-Saving Scheme Based on Network Coding in Software-Defined WDM-PON Networks

    NASA Astrophysics Data System (ADS)

    Ren, Danping; Wu, Shanshan; Zhang, Lijing

    2016-09-01

    In view of the global control and flexible monitoring capabilities of software-defined networking (SDN), we propose a new optical access network architecture for Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) technology is also applied in this architecture to enhance wavelength utilization and reduce light source costs. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce system time delay and energy consumption.
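    The wavelength saving from network coding rests on the familiar XOR trick: the OLT broadcasts one coded frame instead of two separate frames, and each ONU cancels out the copy it already holds. A minimal byte-level sketch (framing, scheduling, and wavelength assignment omitted; not the paper's exact coding scheme):

    ```python
    def xor_encode(frame_a, frame_b):
        """OLT combines two equal-length frames into one coded broadcast frame."""
        return bytes(a ^ b for a, b in zip(frame_a, frame_b))

    def xor_decode(coded, own_frame):
        """An ONU that already holds its own frame recovers the other one."""
        return bytes(c ^ o for c, o in zip(coded, own_frame))
    ```

    One broadcast transmission thus replaces two, which is where the wavelength-resource and energy savings in such schemes come from.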

  17. Method of experimental and calculation determination of dissipative properties of carbon

    NASA Astrophysics Data System (ADS)

    Kazakova, Olga I.; Smolin, Igor Yu.; Bezmozgiy, Iosif M.

    2017-12-01

    This paper describes the process of definition of relations between the damping ratio and strain/state levels in a material. For these purposes, the experimental-calculation approach was applied. The experimental research was performed on plane composite specimens. The tests were accompanied by finite element modeling using the ANSYS software. Optimization was used as a tool for FEM property setting and for finding the above-mentioned relations. A difference between the calculation and experimental results was accepted as objective functions of this optimization. The optimization cycle was implemented using the pSeven DATADVANCE software platform. The developed approach makes it possible to determine the relations between the damping ratio and strain/state levels in the material, which can be used for computer modeling of the structure response under dynamic loading.
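    The optimization cycle described above, minimizing the mismatch between calculated and experimental responses over the damping ratio, can be reduced to a toy parameter scan. The actual study uses ANSYS finite element models driven by the pSeven platform; the `simulate` callable and candidate grid here are stand-ins for illustration only.

    ```python
    def calibrate_damping(measured, simulate, ratios):
        """Pick the damping ratio whose simulated response best matches test data.

        measured: experimental response samples
        simulate: callable, damping ratio -> simulated response samples
        ratios:   candidate damping ratios to scan
        The objective is the sum of squared differences between simulation
        and experiment, evaluated at each candidate.
        """
        def sse(zeta):
            sim = simulate(zeta)
            return sum((m - s) ** 2 for m, s in zip(measured, sim))
        return min(ratios, key=sse)
    ```

    A gradient-free optimizer over a continuous range plays the same role as this grid scan, which is why derivative-free platforms suit FEM-in-the-loop calibration.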

  18. 47 CFR 1.2202 - Competitive bidding design options.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Section 1.2202 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants...) Procedures that utilize mathematical computer optimization software, such as integer programming, to evaluate... evaluating bids using a ranking based on specified factors. (B) Procedures that combine computer optimization...

  19. Numerical Optimization Algorithms and Software for Systems Biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, Michael

    2013-02-02

    The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.

  20. Evaluation of the Red Blood Cell Advanced Software Application on the CellaVision DM96.

    PubMed

    Criel, M; Godefroid, M; Deckers, B; Devos, H; Cauwelier, B; Emmerechts, J

    2016-08-01

    The CellaVision Advanced Red Blood Cell (RBC) Software Application is a new software for advanced morphological analysis of RBCs on a digital microscopy system. Upon automated precharacterization into 21 categories, the software offers the possibility of reclassification of RBCs by the operator. We aimed to define the optimal cut-off to detect morphological RBC abnormalities and to evaluate the precharacterization performance of this software. Thirty-eight blood samples of healthy donors and sixty-eight samples of hospitalized patients were analyzed. Different methodologies to define a cut-off between negativity and positivity were used. Sensitivity and specificity were calculated according to these different cut-offs using the manual microscopic method as the gold standard. Imprecision was assessed by measuring analytical within-run and between-run variability and by measuring between-observer variability. By optimizing the cut-off between negativity and positivity, sensitivities exceeded 80% for 'critical' RBC categories (target cells, tear drop cells, spherocytes, sickle cells, and parasites), while specificities exceeded 80% for the other RBC morphological categories. Results of within-run, between-run, and between-observer variabilities were all clinically acceptable. The CellaVision Advanced RBC Software Application is easy-to-use software that helps to detect most RBC morphological abnormalities in a sensitive and specific way without increasing workload, provided the proper cut-offs are chosen. However, evaluation of the images by an experienced observer remains necessary. © 2016 John Wiley & Sons Ltd.
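    Cut-off optimization between negativity and positivity can be illustrated with Youden's J statistic against the manual gold standard; the study compares several cut-off methodologies, of which this is only one common choice.

    ```python
    import numpy as np

    def best_cutoff(scores, truth):
        """Choose the score threshold maximizing Youden's J = sens + spec - 1.

        scores: per-sample abnormality scores (e.g. fraction of cells the
                software pre-classified into a morphological category)
        truth:  1 = abnormal by manual microscopy (gold standard), 0 = normal
        Returns (threshold, J) for the best candidate threshold.
        """
        scores = np.asarray(scores, float)
        truth = np.asarray(truth, int)
        best_t, best_j = None, -np.inf
        for t in np.unique(scores):          # each observed score is a candidate
            pred = scores >= t
            sens = (pred & (truth == 1)).sum() / max((truth == 1).sum(), 1)
            spec = (~pred & (truth == 0)).sum() / max((truth == 0).sum(), 1)
            j = sens + spec - 1.0
            if j > best_j:
                best_t, best_j = t, j
        return best_t, best_j
    ```

    Depending on the clinical cost of a miss versus a false alarm, a weighted variant of J (or a fixed-sensitivity target) may be preferable to the symmetric optimum.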
