Sample records for mathematical optimization techniques

  1. Research on an augmented Lagrangian penalty function algorithm for nonlinear programming

    NASA Technical Reports Server (NTRS)

    Frair, L.

    1978-01-01

    The augmented Lagrangian (ALAG) Penalty Function Algorithm for optimizing nonlinear mathematical models is discussed. The mathematical models of interest are deterministic in nature and finite dimensional optimization is assumed. A detailed review of penalty function techniques in general and the ALAG technique in particular is presented. Numerical experiments are conducted utilizing a number of nonlinear optimization problems to identify an efficient ALAG Penalty Function Technique for computer implementation.
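
    As a rough illustration of the class of methods this record surveys (not the report's own algorithm), the sketch below runs a textbook augmented Lagrangian iteration on a toy equality-constrained problem; the objective, constraint, and solver choice (SciPy's BFGS for the inner minimizations) are assumptions made only for the example.

```python
# Minimal sketch (not the report's algorithm): a common ALAG iteration for
#   minimize f(x)  subject to  h(x) = 0
# using SciPy's unconstrained BFGS solver for the inner minimizations.
import numpy as np
from scipy.optimize import minimize

def f(x):                      # illustrative objective
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def h(x):                      # illustrative equality constraint h(x) = 0
    return np.array([x[0] + x[1]])

def augmented_lagrangian(x, lam, rho):
    c = h(x)
    return f(x) + lam @ c + 0.5 * rho * (c @ c)

x = np.zeros(2)                # primal starting point
lam = np.zeros(1)              # multiplier estimate
rho = 10.0                     # penalty parameter

for _ in range(20):
    res = minimize(augmented_lagrangian, x, args=(lam, rho), method="BFGS")
    x = res.x
    lam = lam + rho * h(x)     # first-order multiplier update
    if np.linalg.norm(h(x)) < 1e-8:
        break
    rho *= 2.0                 # tighten the penalty while still infeasible

print(x, lam)                  # -> approximately x = [1.5, -1.5], lam = [1.0]
```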

  2. How to mathematically optimize drug regimens using optimal control.

    PubMed

    Moore, Helen

    2018-02-01

    This article gives an overview of a technique called optimal control, which is used to optimize real-world quantities represented by mathematical models. I include background information about the historical development of the technique and applications in a variety of fields. The main focus here is the application to diseases and therapies, particularly the optimization of combination therapies, and I highlight several such examples. I also describe the basic theory of optimal control, and illustrate each of the steps with an example that optimizes the doses in a combination regimen for leukemia. References are provided for more complex cases. The article is aimed at modelers working in drug development, who have not used optimal control previously. My goal is to make this technique more accessible in the biopharma community.

  3. Mathematical Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Bellman, R. (Editor)

    1963-01-01

    The papers collected in this volume were presented at the Symposium on Mathematical Optimization Techniques held in the Santa Monica Civic Auditorium, Santa Monica, California, on October 18-20, 1960. The objective of the symposium was to bring together, for the purpose of mutual education, mathematicians, scientists, and engineers interested in modern optimization techniques. Some 250 persons attended. The techniques discussed included recent developments in linear, integer, convex, and dynamic programming as well as the variational processes surrounding optimal guidance, flight trajectories, statistical decisions, structural configurations, and adaptive control systems. The symposium was sponsored jointly by the University of California, with assistance from the National Science Foundation, the Office of Naval Research, the National Aeronautics and Space Administration, and The RAND Corporation, through Air Force Project RAND.

  4. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.

  5. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

    This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
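
    A minimal sketch of the idea described in this abstract, using synthetic data rather than the TP-H1148 or Viton measurements: all constants of a two-term Prony series, including the exponential time constants, are determined by a general-purpose nonlinear optimizer instead of being partly fixed in advance. SciPy's curve_fit stands in here for the commercial tool mentioned above.

```python
# Sketch only: fit every constant of a two-term Prony series
#   g(t) = g_inf + g1*exp(-t/tau1) + g2*exp(-t/tau2)
# by nonlinear optimization, rather than fixing tau1, tau2 and solving a
# linear least-squares problem for g_inf, g1, g2. Data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def prony(t, g_inf, g1, tau1, g2, tau2):
    return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

rng = np.random.default_rng(0)
t = np.logspace(-2, 3, 60)                       # nonuniformly spaced times
g_true = prony(t, 1.0, 4.0, 0.5, 2.0, 50.0)
g_meas = g_true * (1.0 + 0.01 * rng.standard_normal(t.size))

p0 = [1.0, 1.0, 0.1, 1.0, 10.0]                  # rough starting guess
popt, _ = curve_fit(prony, t, g_meas, p0=p0, bounds=(0.0, np.inf))
print(popt)   # recovered (g_inf, g1, tau1, g2, tau2), close to the true values
```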

  6. Near-earth orbital guidance and remote sensing

    NASA Technical Reports Server (NTRS)

    Powers, W. F.

    1972-01-01

    The curriculum of a short course in remote sensing and parameter optimization is presented. The subjects discussed are: (1) basics of remote sensing and the user community, (2) multivariant spectral analysis, (3) advanced mathematics and physics of remote sensing, (4) the atmospheric environment, (5) imaging sensing, and (6) nonimaging sensing. Mathematical models of optimization techniques are developed.

  7. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  8. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial

    PubMed Central

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-01-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to mathematical optimization tools and wish to apply them to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful in design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques and experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes. PMID:28763039

  9. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial.

    PubMed

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-08-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to mathematical optimization tools and wish to apply them to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful in design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques and experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes.

  10. Optimization Techniques for Analysis of Biological and Social Networks

    DTIC Science & Technology

    2012-03-28

    analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: Implement the proposed algorithms, test and fine...alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters...systematic fashion under a unifying theoretical and algorithmic framework. Optimization, Complex Networks, Social Network Analysis, Computational

  11. Optimal control of a harmonic oscillator: Economic interpretations

    NASA Astrophysics Data System (ADS)

    Janová, Jitka; Hampel, David

    2013-10-01

    Optimal control is a popular technique for modelling and solving dynamic decision problems in economics. A standard interpretation of the criterion function and Lagrange multipliers in the profit maximization problem is well known. On a particular example, we aim at a deeper understanding of the possible economic interpretations of further mathematical and solution features of the optimal control problem: we focus on the solution of the optimal control problem for the harmonic oscillator serving as a model for the Phillips business cycle. We discuss the economic interpretations of the arising mathematical objects with respect to the well-known reasoning for these objects in other problems.

  12. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    The wavefront coding technique is a means of athermalization for infrared imaging systems, and the design of the phase plate is the key to system performance. This paper applies ZEMAX externally compiled programs to the optimization of the phase mask within the normal optical design process, namely by defining the evaluation function of the wavefront coding system based on the consistency of the modulation transfer function (MTF) and by improving the speed of optimization through the introduction of mathematical software. The user writes an external program that computes the evaluation function, exploiting the powerful computing features of the mathematical software to find the optimal parameters of the phase mask, accelerates convergence through a genetic algorithm (GA), and then uses the dynamic data exchange (DDE) interface between ZEMAX and the mathematical software to realize high-speed data exchange. The optimization of a rotationally symmetric phase mask and a cubic phase mask has been completed by this method: inserting the rotationally symmetric phase mask increases the depth of focus nearly 3 times, while the system with the cubic phase mask reaches a 10-fold increase; the MTF consistency improves markedly, and the optimized system operates over a temperature range of -40° to 60°. Results show that, owing to the externally compiled functions and DDE, this optimization method makes it more convenient to define unconventional optimization goals and to quickly optimize optical systems with special properties, which is of particular significance for the optimization of unconventional optical systems.

  13. Method of optimization onboard communication network

    NASA Astrophysics Data System (ADS)

    Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.

    2018-02-01

    In this article the optimization levels of an onboard communication network (OCN) are proposed. We define the basic parameters necessary for the evaluation and comparison of modern OCNs, and we identify a set of initial data for possible modeling of the OCN. We also propose a mathematical technique for implementing the OCN optimization procedure. This technique is based on the principles and ideas of binary programming. It is shown that the binary programming technique allows one to obtain an inherently optimal solution for the avionics tasks. An example of applying the proposed approach to the problem of device assignment in an OCN is considered.
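
    A toy sketch of a binary-programming device-assignment problem of the kind mentioned above; the cost matrix, sizes, and use of SciPy's mixed-integer solver are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch: assign n devices to n switch ports so that total wiring
# cost is minimized, written as a 0/1 program and solved with SciPy's MILP solver.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

cost = np.array([[4.0, 2.0, 8.0],
                 [4.0, 3.0, 7.0],
                 [3.0, 1.0, 6.0]])      # cost[i, j]: device i wired to port j
n = cost.shape[0]
c = cost.ravel()                        # decision vector x[i*n + j] in {0, 1}

rows = []
for i in range(n):                      # each device goes to exactly one port
    a = np.zeros(n * n)
    a[i * n:(i + 1) * n] = 1.0
    rows.append(a)
for j in range(n):                      # each port receives exactly one device
    a = np.zeros(n * n)
    a[j::n] = 1.0
    rows.append(a)
A = np.vstack(rows)

res = milp(c=c,
           constraints=LinearConstraint(A, lb=1.0, ub=1.0),
           integrality=np.ones(n * n),
           bounds=Bounds(0.0, 1.0))
print(res.x.reshape(n, n))              # 0/1 assignment matrix
print(res.fun)                          # minimum total wiring cost
```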

  14. A mathematical model for the generation and control of a pH gradient in an immobilized enzyme system involving acid generation.

    PubMed

    Chen, G; Fournier, R L; Varanasi, S

    1998-02-20

    An optimal pH control technique has been developed for multistep enzymatic synthesis reactions where the optimal pH differs by several units for each step. This technique separates an acidic environment from a basic environment by the hydrolysis of urea within a thin layer of immobilized urease. With this technique, the two steps of an enzymatic reaction can take place simultaneously, in proximity to each other, and each at its respective optimal pH. Because a reaction system involving acid generation represents a more challenging test of this pH control technique, a number of factors that affect the generation of such a pH gradient are considered in this study. The mathematical model proposed is based on several simplifying assumptions and represents a first attempt to provide an analysis of this complex problem. The results show that, by choosing appropriate parameters, the pH control technique can still generate the desired pH gradient even if there is an acid-generating reaction in the system. Copyright 1998 John Wiley & Sons, Inc.

  15. The optimization problems of CP operation

    NASA Astrophysics Data System (ADS)

    Kler, A. M.; Stepanova, E. L.; Maximov, A. S.

    2017-11-01

    Enhancing the energy and economic efficiency of a CP is an urgent problem, and one of the main methods for addressing it is optimization of CP operation. To solve the optimization problems of CP operation, the Energy Systems Institute, SB of RAS, has developed software that makes it possible to carry out optimization calculations of CP operation. The software is based on techniques and software tools for mathematical modeling and optimization of heat and power installations. Detailed mathematical models of new equipment have been developed in this work; they describe sufficiently accurately the processes that occur in the installations. The developed models include steam turbine models (based on the checking calculation) that take account of all steam turbine compartments and the regeneration system, and they also enable calculations with regenerative heaters disconnected. The software for mathematical modeling of equipment and optimization of CP operation implements the technique for optimizing CP operating conditions as software tools integrated in a common user interface. The optimization of CP operation often generates the need to determine the minimum and maximum possible total useful electricity capacity of the plant at set heat loads of consumers, i.e. to determine the interval over which the CP capacity may vary. The software has been applied to optimize the operating conditions of the Novo-Irkutskaya CP of JSC “Irkutskenergo”. The efficiency of operating-condition optimization and the possibility of determining the CP energy characteristics necessary for the optimization of power system operation are shown.

  16. Optimal Control Inventory Stochastic With Production Deteriorating

    NASA Astrophysics Data System (ADS)

    Affandi, Pardi

    2018-01-01

    In this paper, we use an optimal control approach to determine the optimal production rate. Most inventory production models deal with a single item. We first build the stochastic mathematical model of the inventory; in this model we also assume that the items are kept in the same store. The mathematical model of the inventory problem can be deterministic or stochastic. This research discusses how to formulate the stochastic model and how to solve the inventory model using optimal control techniques. The main tool for deriving the necessary optimality conditions is the Pontryagin maximum principle, which involves the Hamiltonian function. In this way we obtain the optimal production rate for a production inventory system in which items are subject to deterioration.
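
    For reference, a generic statement of the necessary conditions the abstract appeals to (the Hamiltonian and the Pontryagin maximum principle), written for a standard finite-horizon problem rather than the paper's specific inventory model:

```latex
% Generic form of the necessary conditions referred to above (not the
% paper's specific inventory model). For
%   \min_u  J = \int_0^T f_0(x(t), u(t), t)\,dt,   \dot{x} = f(x, u, t),
% define the Hamiltonian and apply the Pontryagin principle:
\begin{align}
  H(x, u, \lambda, t) &= f_0(x, u, t) + \lambda^{\top} f(x, u, t), \\
  \dot{x}             &= \frac{\partial H}{\partial \lambda} = f(x, u, t), \\
  \dot{\lambda}       &= -\frac{\partial H}{\partial x}, \qquad \lambda(T) = 0, \\
  u^{*}(t)            &= \arg\min_{u} H\bigl(x^{*}(t), u, \lambda^{*}(t), t\bigr).
\end{align}
```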

  17. Disturbance decoupling, decentralized control and the Riccati equation

    NASA Technical Reports Server (NTRS)

    Garzia, M. R.; Loparo, K. A.; Martin, C. F.

    1981-01-01

    The disturbance decoupling and optimal decentralized control problems are looked at using identical mathematical techniques. A statement of the problems and the development of their solution approach is presented. Preliminary results are given for the optimal decentralized control problem.

  18. Strategies for Fermentation Medium Optimization: An In-Depth Review

    PubMed Central

    Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.

    2017-01-01

    Optimization of the production medium is required to maximize the metabolite yield. This can be achieved by using a wide range of techniques, from the classical “one-factor-at-a-time” approach to modern statistical and mathematical techniques, viz. artificial neural networks (ANN), genetic algorithms (GA), etc. Every technique comes with its own advantages and disadvantages, and despite drawbacks some techniques are applied to obtain the best results. Use of various optimization techniques in combination also provides the desired results. In this article an attempt has been made to review the media optimization techniques currently applied during the fermentation process of metabolite production. Comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques has been done, and a logical basis for selecting the design of the fermentation medium is given in the present review. Overall, this review provides the rationale for the selection of a suitable optimization technique for media design employed during the fermentation process of metabolite production. PMID:28111566

  19. A mathematical framework for yield (vs. rate) optimization in constraint-based modeling and applications in metabolic engineering.

    PubMed

    Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen

    2018-05-01

    The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
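
    A small sketch of the key computational step described above: a yield (ratio of fluxes) objective over linear flux-balance constraints is a linear-fractional program, which can be solved as a linear program after a Charnes-Cooper-type change of variables. The three-reaction network and bounds below are invented for illustration and are not from the paper.

```python
# Toy sketch: maximize the product yield v_p / v_s over S v = 0 and flux bounds
# by recasting the linear-fractional program as an LP (w = t*v, d^T w = 1)
# and solving it with SciPy's linprog.
import numpy as np
from scipy.optimize import linprog

# Reactions: v = [v_s (substrate uptake), v_p (2 A -> product), v_m (maintenance)]
# Metabolite A balance: v_s - 2 v_p - v_m = 0; bounds 0<=v_s<=10, 0<=v_p<=5, 1<=v_m<=10.
# Variables of the transformed LP: [w_s, w_p, w_m, t].
c = [0.0, -1.0, 0.0, 0.0]                 # maximize w_p (yield numerator)

A_eq = [[1.0, -2.0, -1.0, 0.0],           # steady state, scaled by t
        [1.0,  0.0,  0.0, 0.0]]           # normalization d^T w = w_s = 1
b_eq = [0.0, 1.0]

A_ub = [[1.0, 0.0, 0.0, -10.0],           # w_s <= 10 t
        [0.0, 1.0, 0.0,  -5.0],           # w_p <=  5 t
        [0.0, 0.0, 1.0, -10.0],           # w_m <= 10 t
        [0.0, 0.0, -1.0,  1.0]]           # w_m >=  1 t  (maintenance demand)
b_ub = [0.0, 0.0, 0.0, 0.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
w, t = res.x[:3], res.x[3]
print("optimal yield:", -res.fun)         # 0.45 for this toy network
print("fluxes v = w/t:", w / t)           # [10.0, 4.5, 1.0]
```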

  20. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner's experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process which includes correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.

  1. Mathematical-Programming Approaches to Test Item Pool Design. Research Report.

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.; van der Linden, Wim J.; Ariel, Adelaide

    This paper presents an approach to item pool design that has the potential to improve on the quality of current item pools in educational and psychological testing and thus to increase both measurement precision and validity. The approach consists of the application of mathematical programming techniques to calculate optimal blueprints for item…

  2. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279
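
    A hedged sketch of the kind of computation these two records describe: a continuous D-optimal design for a quadratic regression model on a discretized design space, obtained by maximizing the log-determinant of the information matrix. The use of cvxpy and its default solver is an assumption made for illustration; the paper develops its own SDP and NLP formulations.

```python
# Sketch only: continuous D-optimal design for y = b0 + b1*x + b2*x^2 on a
# discretized design space, as a convex (log-det) problem over design weights.
import numpy as np
import cvxpy as cp

xs = np.linspace(-1.0, 1.0, 41)                  # candidate design points
F = np.vstack([np.ones_like(xs), xs, xs ** 2]).T # regression vectors f(x)

w = cp.Variable(len(xs), nonneg=True)            # design weights
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(xs)))  # information matrix

prob = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
prob.solve()

support = w.value > 1e-2
print(xs[support], np.round(w.value[support], 3))
# Expected: weight ~1/3 at each of x = -1, 0, 1 (the known D-optimal design).
```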

  3. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.

  4. Computation of physiological human vocal fold parameters by mathematical optimization of a biomechanical model

    PubMed Central

    Yang, Anxiong; Stingl, Michael; Berry, David A.; Lohscheller, Jörg; Voigt, Daniel; Eysholdt, Ulrich; Döllinger, Michael

    2011-01-01

    With the use of an endoscopic, high-speed camera, vocal fold dynamics may be observed clinically during phonation. However, observation and subjective judgment alone may be insufficient for clinical diagnosis and documentation of improved vocal function, especially when the laryngeal disease lacks any clear morphological presentation. In this study, biomechanical parameters of the vocal folds are computed by adjusting the corresponding parameters of a three-dimensional model until the dynamics of both systems are similar. First, a mathematical optimization method is presented. Next, model parameters (such as pressure, tension and masses) are adjusted to reproduce vocal fold dynamics, and the deduced parameters are physiologically interpreted. Various combinations of global and local optimization techniques are attempted. Evaluation of the optimization procedure is performed using 50 synthetically generated data sets. The results show sufficient reliability, including 0.07 normalized error, 96% correlation, and 91% accuracy. The technique is also demonstrated on data from human hemilarynx experiments, in which a low normalized error (0.16) and high correlation (84%) values were achieved. In the future, this technique may be applied to clinical high-speed images, yielding objective measures with which to document improved vocal function of patients with voice disorders. PMID:21877808

  5. Structural optimization of framed structures using generalized optimality criteria

    NASA Technical Reports Server (NTRS)

    Kolonay, R. M.; Venkayya, Vipperla B.; Tischler, V. A.; Canfield, R. A.

    1989-01-01

    The application of a generalized optimality criteria to framed structures is presented. The optimality conditions, Lagrangian multipliers, resizing algorithm, and scaling procedures are all represented as a function of the objective and constraint functions along with their respective gradients. The optimization of two plane frames under multiple loading conditions subject to stress, displacement, generalized stiffness, and side constraints is presented. These results are compared to those found by optimizing the frames using a nonlinear mathematical programming technique.

  6. Mathematical description and program documentation for CLASSY, an adaptive maximum likelihood clustering method

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Rassbach, M. E.

    1979-01-01

    Discussed in this report is the clustering algorithm CLASSY, including detailed descriptions of its general structure and mathematical background and of the various major subroutines. The report provides a development of the logic and equations used with specific reference to program variables. Some comments on timing and proposed optimization techniques are included.

  7. Analysis of Prospective Mathematics Teachers’ Basic Teaching Skills (a Study of Mathematics Education Department Students’ Field Experience Program at STKIP Garut)

    NASA Astrophysics Data System (ADS)

    Rahayu, D. V.

    2017-02-01

    This study was intended to figure out the basic teaching skills of Mathematics Education Department students of STKIP Garut in the Field Experience Program in academic year 2014/2015. This study was qualitative research using a descriptive analysis technique. The instrument used in this study was an observation sheet to measure basic mathematics teaching skills. The results showed that content mastery and explaining skills were in the average category. Questioning skills, skills in conducting variations and skills in conducting assessment were in the good category. Skills in managing the classroom and in giving motivation were in the poor category. Based on these results, it can be concluded that the students’ basic teaching skills were not optimal. It is recommended that the students be taught with an appropriate strategy so that they can optimize their basic teaching skills.

  8. Class and Home Problems: Optimization Problems

    ERIC Educational Resources Information Center

    Anderson, Brian J.; Hissam, Robin S.; Shaeiwitz, Joseph A.; Turton, Richard

    2011-01-01

    Optimization problems suitable for all levels of chemical engineering students are available. These problems do not require advanced mathematical techniques, since they can be solved using typical software used by students and practitioners. The method used to solve these problems forces students to understand the trends for the different terms…

  9. Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…

  10. Calculation of Pareto-optimal solutions to multiple-objective problems using threshold-of-acceptability constraints

    NASA Technical Reports Server (NTRS)

    Giesy, D. P.

    1978-01-01

    A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
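
    A minimal sketch of the idea (not the report's code): Pareto points for a toy two-objective problem are generated by minimizing one objective while a threshold-of-acceptability constraint is placed on the other, and then sweeping the threshold. The objectives, starting point, and SLSQP solver are illustrative assumptions.

```python
# Sketch: trace a Pareto front by minimizing f1 subject to f2(x) <= threshold
# for a sweep of threshold values (a threshold-of-acceptability constraint).
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: x[0] ** 2 + x[1] ** 2                       # objective kept as the cost
f2 = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2       # objective being thresholded

pareto = []
for threshold in np.linspace(0.0, 2.0, 9):
    con = {"type": "ineq", "fun": lambda x, t=threshold: t - f2(x)}   # f2(x) <= threshold
    res = minimize(f1, x0=[0.5, 0.5], constraints=[con], method="SLSQP")
    if res.success:
        pareto.append((f1(res.x), f2(res.x)))

for point in pareto:
    print(np.round(point, 3))     # sampled points along the Pareto front
```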

  11. Survey of optimization techniques for nonlinear spacecraft trajectory searches

    NASA Technical Reports Server (NTRS)

    Wang, Tseng-Chan; Stanford, Richard H.; Sunseri, Richard F.; Breckheimer, Peter J.

    1988-01-01

    Mathematical analysis of the optimal search of a nonlinear spacecraft trajectory to arrive at a set of desired targets is presented. A high precision integrated trajectory program and several optimization software libraries are used to search for a converged nonlinear spacecraft trajectory. Several examples for the Galileo Jupiter Orbiter and the Ocean Topography Experiment (TOPEX) are presented that illustrate a variety of the optimization methods used in nonlinear spacecraft trajectory searches.

  12. Selected Bibliography on Optimizing Techniques in Statistics

    DTIC Science & Technology

    1981-08-01

    problems in business, industry and government are formulated as optimization problems. Topics in optimization constitute an essential area of study in...numerical, (iii) mathematical programming, and (iv) variational. We provide pertinent references with statistical applications in the above areas in Part I...TMS Advanced Studies in Management Sciences, North-Holland Publishing Company, Amsterdam. (To appear.) Spang, H. A. (1962). A review of minimization

  13. A mathematical tool to generate complex whole body motor tasks and test hypotheses on underlying motor planning.

    PubMed

    Tagliabue, Michele; Pedrocchi, Alessandra; Pozzo, Thierry; Ferrigno, Giancarlo

    2008-01-01

    In spite of the complexity of human motor behavior, difficulties in mathematical modeling have restricted attempts to identify the motor planning criterion used by the central nervous system to rather simple movements. This paper presents a novel simulation technique able to predict the "desired trajectory" corresponding to a wide range of kinematic and kinetic optimality criteria for tasks involving many degrees of freedom and the coordination between goal achievement and balance maintenance. Proper time discretization, inverse dynamic methods and constrained optimization techniques are combined. The application of this simulator to a planar whole body pointing movement shows its effectiveness in managing system nonlinearities and instability as well as in ensuring the anatomo-physiological feasibility of predicted motor plans. In addition, the simulator's capability to simultaneously optimize competing movement aspects represents an interesting opportunity for the motor control community, in which the coexistence of several controlled variables has been hypothesized.

  14. A system approach to aircraft optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.

  15. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

    An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method as yielding better solutions (in terms of resolutions) to the particular problem than those of a standard analog program as well as demonstrating flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Example of slit usage and a limited comparison of predicted results and actual results obtained with a 600 MeV cyclotron are given.
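
    A minimal sketch of a creeping random search (sequential random perturbation) of the kind named above, applied to a placeholder objective with simple parameter bounds; the beam-transport model, state-variable constraints, and step-size schedule of the report are not reproduced.

```python
# Sketch: creeping random search with bounded parameters and a shrinking step.
import numpy as np

def objective(p):                       # placeholder for the expensive simulation
    return np.sum((p - np.array([0.3, -0.7, 1.2])) ** 2)

rng = np.random.default_rng(1)
lower = np.array([-2.0, -2.0, -2.0])    # parameter constraints
upper = np.array([ 2.0,  2.0,  2.0])

p = rng.uniform(lower, upper)           # feasible starting point
best = objective(p)
step = 0.5

for it in range(2000):
    trial = np.clip(p + step * rng.standard_normal(p.size), lower, upper)
    f = objective(trial)
    if f < best:                        # accept only improving perturbations
        p, best = trial, f
    elif it % 200 == 199:
        step *= 0.5                     # shrink the perturbation size occasionally

print(p, best)
```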

  16. A Dynamic Process Model for Optimizing the Hospital Environment Cash-Flow

    NASA Astrophysics Data System (ADS)

    Pater, Flavius; Rosu, Serban

    2011-09-01

    This article presents a new approach to some fundamental techniques for solving dynamic programming problems with the use of functional equations. We analyze the problem of minimizing the cost of treatment in a hospital environment. Mathematical modeling of this process leads to an optimal control problem with a finite horizon.

  17. Optomechanical study and optimization of cantilever plate dynamics

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1995-06-01

    Optimum dynamic characteristics of an aluminum cantilever plate containing holes of different sizes and located at arbitrary positions on the plate are studied computationally and experimentally. The objective function of this optimization is the minimization/maximization of the natural frequencies of the plate in terms of such design variables as the sizes and locations of the holes. The optimization process is performed using the finite element method and mathematical programming techniques in order to obtain the natural frequencies and the optimum conditions of the plate, respectively. The modal behavior of the resultant optimal plate layout is studied experimentally through the use of holographic interferometry techniques. Comparisons of the computational and experimental results show that good agreement between theory and test is obtained. The comparisons also show that the combined, or hybrid, use of experimental and computational techniques is complementary and proves to be a very efficient tool for performing optimization studies of mechanical components.

  18. Optimization and analysis of large chemical kinetic mechanisms using the solution mapping method - Combustion of methane

    NASA Technical Reports Server (NTRS)

    Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.

    1992-01-01

    A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
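
    A hedged sketch of the parameterization step described in this abstract: a (placeholder) model is evaluated at a small factorial design of two active parameters, an algebraic response surface is fitted by least squares, and the cheap surrogate is then optimized in place of the full model. All numbers and the quadratic basis are assumptions for illustration.

```python
# Sketch: factorial design -> algebraic response surface -> optimize surrogate.
import numpy as np
from scipy.optimize import minimize

def model(k1, k2):                       # placeholder for the expensive kinetic model
    return (k1 - 0.4) ** 2 + 2.0 * (k2 + 0.1) ** 2 + 0.3 * k1 * k2

levels = [-1.0, 0.0, 1.0]                # 3-level factorial in the two active parameters
design = np.array([(a, b) for a in levels for b in levels])
response = np.array([model(a, b) for a, b in design])

def basis(p):                            # quadratic response-surface terms
    k1, k2 = p
    return np.array([1.0, k1, k2, k1 * k2, k1 ** 2, k2 ** 2])

X = np.vstack([basis(p) for p in design])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)   # fitted surface coefficients

surrogate = lambda p: coef @ basis(p)
opt = minimize(surrogate, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print(opt.x)                             # optimum predicted by the response surface
```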

  19. Optimizing Marine Corps Pilot Conversion to the Joint Strike Fighter

    DTIC Science & Technology

    2010-06-01

    Office of Civilian Manpower Management. Included are a summary of modeling efforts and the mathematics behind them, models for aggregate manpower...Vajda for the Admiralty of the Royal Navy. They also mention work by one of the first actuaries, John Rowe, who, as early as 1779, conducted studies...idea of using mathematical and statistical techniques to obtain better information on the manpower requirements has its roots in personnel research

  20. A case study on topology optimized design for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Gebisa, A. W.; Lemu, H. G.

    2017-12-01

    Topology optimization is an optimization method that employs mathematical tools to optimize material distribution in a part to be designed. Earlier developments of topology optimization assumed conventional manufacturing techniques, which have limitations in producing complex geometries; this has hindered topology optimization efforts from being fully realized. With the emergence of additive manufacturing (AM) technologies, which build a part layer upon layer directly from three-dimensional (3D) model data, producing complex geometry is no longer an issue. Realization of topology optimization through AM provides full design freedom for design engineers. The article focuses on a topologically optimized design approach for additive manufacturing, with a case study on the lightweight design of a jet engine bracket. The study results show that topology optimization is a powerful design technique that can reduce the weight of a product while maintaining the design requirements when additive manufacturing is considered.

  1. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In this paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of the cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as a discrete optimization problem (a generalized traveling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. To solve the GTSP we propose to use the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.

  2. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    PubMed Central

    2011-01-01

    Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520

  3. An efficient multilevel optimization method for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.

    1988-01-01

    An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system level and subsystem level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.

  4. Multi-Innovation Gradient Iterative Locally Weighted Learning Identification for A Nonlinear Ship Maneuvering System

    NASA Astrophysics Data System (ADS)

    Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan

    2018-06-01

    This paper explores a highly accurate identification modeling approach for ship maneuvering motion using full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The proposed method's advantages are as follows: first, it can avoid the unmodeled dynamics and multicollinearity inherent to the conventional parametric model; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, the MIGI is not sensitive to the initial parameter value and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as model platforms to study ship maneuvering.

  5. Tuning of PID controller using optimization techniques for a MIMO process

    NASA Astrophysics Data System (ADS)

    Thulasi dharan, S.; Kavyarasan, K.; Bagyaveereswaran, V.

    2017-11-01

    In this paper, two processes are considered: the quadruple tank process and the CSTR (Continuous Stirred Tank Reactor) process. These are widely used in many industrial applications across various domains, the CSTR especially in chemical plants. First, a mathematical model of each process is developed, followed by linearization of the system, since both are MIMO processes. The controllers are the major part of driving the whole process to the desired operating point for a given application, so tuning of the controller plays a major role in the overall process. For tuning the parameters we use two optimization techniques, Particle Swarm Optimization and the Genetic Algorithm. These techniques are widely used in different applications; we apply them here to obtain the best tuned values among many candidates. Finally, we compare the performance of each process with both techniques.

  6. Risk Analysis for Resource Planning Optimization

    NASA Technical Reports Server (NTRS)

    Chueng, Kar-Ming

    2008-01-01

    The main purpose of this paper is to introduce a risk management approach that allows planners to quantify the risk and efficiency tradeoff in the presence of uncertainties, and to make forward-looking choices in the development and execution of the plan. We demonstrate a planning and risk analysis framework that tightly integrates mathematical optimization, empirical simulation, and theoretical analysis techniques to solve complex problems.

  7. Optimization techniques applied to spectrum management for communications satellites

    NASA Astrophysics Data System (ADS)

    Ottey, H. R.; Sullivan, T. M.; Zusman, F. S.

    This paper describes user requirements, algorithms and software design features for the application of optimization techniques to the management of the geostationary orbit/spectrum resource. Relevant problems include parameter sensitivity analyses, frequency and orbit position assignment coordination, and orbit position allotment planning. It is shown how integer and nonlinear programming as well as heuristic search techniques can be used to solve these problems. Formalized mathematical objective functions that define the problems are presented. Constraint functions that impart the necessary solution bounds are described. A versatile program structure is outlined, which would allow problems to be solved in stages while varying the problem space, solution resolution, objective function and constraints.

  8. Restart Operator Meta-heuristics for a Problem-Oriented Evolutionary Strategies Algorithm in Inverse Mathematical MISO Modelling Problem Solving

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.

    2017-02-01

    This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines the system with multiple inputs and a single output, and a vector of the initial point coordinates. The described problem is complex and multimodal, and for this reason the proposed evolutionary-based optimization technique, which is oriented towards dynamical system identification problems, was applied. To improve its performance, an algorithm restart operator was implemented.

  9. Low order H∞ optimal control for ACFA blended wing body aircraft

    NASA Astrophysics Data System (ADS)

    Haniš, T.; Kucera, V.; Hromčík, M.

    2013-12-01

    Advanced nonconvex nonsmooth optimization techniques for fixed-order H∞ robust control are proposed in this paper for the design of flight control systems (FCS) with a prescribed structure. Compared to classical techniques - tuning and successive closure of particular single-input single-output (SISO) loops such as dampers, attitude stabilizers, etc. - all loops are designed simultaneously by means of a quite intuitive selection of weighting filters. In contrast to standard optimization techniques (H2, H∞ optimization), though, the resulting controller respects the prescribed structure in terms of the engaged channels and orders (e.g., proportional (P), proportional-integral (PI), and proportional-integral-derivative (PID) controllers). In addition, robustness with regard to multimodel uncertainty is also addressed, which is of great importance for aerospace applications as well. In this way, robust controllers for various Mach numbers, altitudes, or mass cases can be obtained directly, based only on particular mathematical models for the respective combinations of the flight parameters.

  10. Magic in the machine: a computational magician's assistant.

    PubMed

    Williams, Howard; McOwan, Peter W

    2014-01-01

    A human magician blends science, psychology, and performance to create a magical effect. In this paper we explore what can be achieved when that human intelligence is replaced or assisted by machine intelligence. Magical effects are all in some form based on hidden mathematical, scientific, or psychological principles; often the parameters controlling these underpinning techniques are hard for a magician to blend to maximize the magical effect required. The complexity is often caused by interacting and often conflicting physical and psychological constraints that need to be optimally balanced. Normally this tuning is done by trial and error, combined with human intuitions. Here we focus on applying Artificial Intelligence methods to the creation and optimization of magic tricks exploiting mathematical principles. We use experimentally derived data about particular perceptual and cognitive features, combined with a model of the underlying mathematical process to provide a psychologically valid metric to allow optimization of magical impact. In the paper we introduce our optimization methodology and describe how it can be flexibly applied to a range of different types of mathematics based tricks. We also provide two case studies as exemplars of the methodology at work: a magical jigsaw, and a mind reading card trick effect. We evaluate each trick created through testing in laboratory and public performances, and further demonstrate the real world efficacy of our approach for professional performers through sales of the tricks in a reputable magic shop in London.

  11. Optimal multi-floor plant layout based on the mathematical programming and particle swarm optimization.

    PubMed

    Lee, Chang Jun

    2015-01-01

    In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. What has been lacking in previous research, however, is the transformation of various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment have to be respected to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, whereas many multi-floor plants have been constructed over the last decade. Therefore, an algorithm handling various regulations and multi-floor plants should be developed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated in terms of mathematical equations. The objective function is the sum of pipeline and pumping costs, and various safety and maintenance issues are transformed into inequality or equality constraints. This problem is very hard to solve because of the complex nonlinear constraints, so conventional MINLP solvers that use derivatives of the equations cannot be applied. In this study, the Particle Swarm Optimization (PSO) technique is therefore employed. An ethylene oxide plant is used to illustrate and verify the efficacy of this study.
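
    A generic particle swarm optimization sketch in the spirit of this record (not the paper's ethylene oxide model): three connected equipment items are placed in a plane to minimize total pipe run, with a penalty term standing in for the minimum safety-separation constraints.

```python
# Sketch: PSO with a quadratic penalty for violated safety separations.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 3, 2
D_SAFE = 3.0                                    # required separation [m]
links = [(0, 1), (1, 2)]                        # which items are piped together

def cost(x):
    pos = x.reshape(n_items, dim)
    pipe = sum(np.linalg.norm(pos[i] - pos[j]) for i, j in links)
    penalty = 0.0
    for i in range(n_items):
        for j in range(i + 1, n_items):
            gap = np.linalg.norm(pos[i] - pos[j])
            penalty += max(0.0, D_SAFE - gap) ** 2
    return pipe + 100.0 * penalty

n_particles, n_dims = 30, n_items * dim
x = rng.uniform(-10, 10, (n_particles, n_dims))  # particle positions
v = np.zeros_like(x)                             # particle velocities
pbest = x.copy()
pbest_f = np.array([cost(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

w_in, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration weights
for _ in range(300):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([cost(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest.reshape(n_items, dim), pbest_f.min())
# Expected: linked items end up about D_SAFE apart, so total pipe run -> 6 m.
```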

  12. The reduced space Sequential Quadratic Programming (SQP) method for calculating the worst resonance response of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao; Wu, Wenwang; Fang, Daining

    2018-07-01

    A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.

  13. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  14. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years a new and highly demanding framework has emerged for environmental sciences and applied mathematics, driven by issues that concern not only the scientific community but society in general: global warming, renewable energy resources, and natural hazards, among others. The research community follows two main directions to address these problems: the use of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, to reach credible local forecasts, these two data sources are combined by algorithms that are essentially based on optimization processes. Conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study, adopting least squares methods based on classical Euclidean geometry. In the present work, new optimization techniques are discussed that draw on a rapidly advancing branch of applied mathematics, Information Geometry. The latter shows that distributions of data sets are elements of non-Euclidean structures whose underlying geometry may differ significantly from the classical one. Geometrical entities such as Riemannian metrics, distances, curvature, and affine connections are used to define the optimum distributions fitting the environmental data at specific areas and to form the differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.
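
    As a small, generic illustration of why distances between probability distributions need not follow Euclidean intuition, the sketch below compares the Euclidean distance between Gaussian parameter vectors with the Kullback-Leibler divergence between the corresponding distributions. This is only a toy demonstration of the non-Euclidean point, not the Fisher-Rao machinery used in the paper.

```python
# Pairs of Gaussians whose parameter vectors are equally far apart in the Euclidean
# sense can be very differently separated in an information-theoretic sense.
# The example numbers are assumptions chosen to make the contrast visible.
import numpy as np

def kl_gauss(m0, s0, m1, s1):
    """KL divergence between N(m0, s0^2) and N(m1, s1^2) (closed form)."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

# Two pairs with identical Euclidean distance in (mean, std) parameter space.
pair_a = ((0.0, 1.0), (1.0, 1.0))   # shift the mean of a wide distribution
pair_b = ((0.0, 0.1), (1.0, 0.1))   # same shift, but the distributions are narrow

for (p, q) in (pair_a, pair_b):
    euclid = np.hypot(p[0] - q[0], p[1] - q[1])
    kl = kl_gauss(p[0], p[1], q[0], q[1])
    print(f"Euclidean distance {euclid:.2f}  vs  KL divergence {kl:.2f}")
```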

  15. Mathematical programming for the efficient allocation of health care resources.

    PubMed

    Stinnett, A A; Paltiel, A D

    1996-10-01

    Previous discussions of methods for the efficient allocation of health care resources subject to a budget constraint have relied on unnecessarily restrictive assumptions. This paper makes use of established optimization techniques to demonstrate that a general mathematical programming framework can accommodate much more complex information regarding returns to scale, partial and complete indivisibility and program interdependence. Methods are also presented for incorporating ethical constraints into the resource allocation process, including explicit identification of the cost of equity.
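
    A minimal sketch of the kind of budget-constrained allocation discussed above is given below: choose funding fractions for a handful of hypothetical programs to maximize total health benefit subject to a budget. The benefits, costs, and budget are made-up numbers, and a real model of the sort described in the paper would add indivisibility, returns to scale, and program interdependence (which require integer or nonlinear programming rather than this plain LP).

```python
# Hedged sketch: continuous budget allocation as a linear program with SciPy.
import numpy as np
from scipy.optimize import linprog

benefit = np.array([120.0, 300.0, 80.0, 150.0])   # benefit if fully funded (assumed)
cost    = np.array([1.0,   4.0,   0.5,  2.0])     # cost (millions) if fully funded (assumed)
budget  = 5.0

# linprog minimizes, so negate the benefit; x[i] is the funded fraction of program i.
res = linprog(c=-benefit,
              A_ub=[cost], b_ub=[budget],
              bounds=[(0, 1)] * len(benefit),
              method="highs")

print("funded fractions:", np.round(res.x, 2), " total benefit:", -res.fun)
```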

  16. A reducing of a chaotic movement to a periodic orbit, of a micro-electro-mechanical system, by using an optimal linear control design

    NASA Astrophysics Data System (ADS)

    Chavarette, Fábio Roberto; Balthazar, José Manoel; Felix, Jorge L. P.; Rafikov, Marat

    2009-05-01

    This paper analyzes the nonlinear dynamics, with chaotic behavior, of a particular micro-electro-mechanical system. We use an optimal linear control technique to reduce the irregular (chaotic) oscillatory motion of the nonlinear system to a periodic orbit, adopting the mathematical model of a MEMS proposed by Luo and Wang.
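
    The sketch below shows an optimal linear (LQR-style) feedback gain of the kind used to steer a trajectory toward a desired orbit; the 2x2 linearized system matrices are placeholders, not the MEMS model of Luo and Wang.

```python
# Hedged sketch: linear-quadratic regulator gain from the continuous Riccati equation.
# The matrices below are illustrative assumptions for a generic second-order system.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [1.0, -0.1]])          # assumed linearization about the target orbit
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                         # state deviation weight
R = np.array([[1.0]])                 # control effort weight

P = solve_continuous_are(A, B, Q, R)  # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)       # optimal feedback gain, u = -K (x - x_orbit)

print("LQR gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```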

  17. Systems oncology: towards patient-specific treatment regimes informed by multiscale mathematical modelling.

    PubMed

    Powathil, Gibin G; Swat, Maciej; Chaplain, Mark A J

    2015-02-01

    The multiscale complexity of cancer as a disease necessitates a corresponding multiscale modelling approach to produce truly predictive mathematical models capable of improving existing treatment protocols. To capture all the dynamics of solid tumour growth and its progression, mathematical modellers need to couple biological processes occurring at various spatial and temporal scales (from genes to tissues). Because the effectiveness of cancer therapy is considerably affected by intracellular and extracellular heterogeneities as well as by dynamical changes in the tissue microenvironment, any modelling attempt to optimise existing protocols must consider these factors, ultimately leading to improved multimodal treatment regimes. By improving existing and building new mathematical models of cancer, modellers can play an important role in preventing the use of potentially sub-optimal treatment combinations. In this paper, we analyse a multiscale computational mathematical model for cancer growth and spread, incorporating the multiple effects of radiation therapy and chemotherapy on the patient survival probability, and implement the model using two different cell-based modelling techniques. We show that the insights provided by such multiscale modelling approaches can ultimately help in designing optimal patient-specific multi-modality treatment protocols that may increase patients' quality of life. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. The art of spacecraft design: A multidisciplinary challenge

    NASA Technical Reports Server (NTRS)

    Abdi, F.; Ide, H.; Levine, M.; Austel, L.

    1989-01-01

    Actual design turn-around time has become shorter due to the optimization techniques that have been introduced into the design process. It seems that what, how, and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. The use of Taylor series expansion and a finite-differencing technique for sensitivity derivatives in each discipline makes this approach well suited for screening dominant variables from nondominant ones. In this study, current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied to a simple cone-type forebody of a high-speed vehicle configuration to understand the basic aerodynamic/structure interaction in a hypersonic flight condition.
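
    A minimal sketch of the finite-difference sensitivity screening idea is given below; the response function and variable names are illustrative assumptions, not the CFD/structural model of the paper.

```python
# Hedged sketch: screen dominant design variables with forward finite-difference
# sensitivity derivatives evaluated at a baseline design point.
import numpy as np

def response(x):
    """Hypothetical aero/structural response (e.g., a drag-plus-stress metric)."""
    cone_angle, wall_thickness, dynamic_pressure = x
    return 0.8 * cone_angle**2 + 0.1 * wall_thickness + 0.02 * dynamic_pressure * cone_angle

x0 = np.array([5.0, 0.02, 50.0])          # assumed baseline design point
h = 1e-6 * np.maximum(1.0, np.abs(x0))    # relative step sizes

sens = np.zeros_like(x0)
for i in range(len(x0)):
    xp = x0.copy(); xp[i] += h[i]
    sens[i] = (response(xp) - response(x0)) / h[i]   # forward difference

# Normalize to compare the relative influence of each variable.
rel = np.abs(sens * x0) / abs(response(x0))
for name, r in zip(["cone_angle", "wall_thickness", "dynamic_pressure"], rel):
    print(f"{name:>16s}: relative sensitivity {r:.3f}")
```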

  19. A New Hybrid BFOA-PSO Optimization Technique for Decoupling and Robust Control of Two-Coupled Distillation Column Process.

    PubMed

    Abdelkarim, Noha; Mohamed, Amr E; El-Garhy, Ahmed M; Dorrah, Hassen T

    2016-01-01

    The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller.

  20. A New Hybrid BFOA-PSO Optimization Technique for Decoupling and Robust Control of Two-Coupled Distillation Column Process

    PubMed Central

    Mohamed, Amr E.; Dorrah, Hassen T.

    2016-01-01

    The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller. PMID:27807444

  1. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final step of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
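
    For orientation, a sketch of the two-bar truss sizing problem posed as a nonlinear program and handed to a general-purpose optimizer is shown below (here SciPy's SLSQP rather than CONMIN, ADS, or NPSOL). The geometry, loads, and the simplified stress and buckling expressions are illustrative assumptions, not the exact model of the paper.

```python
# Hedged sketch: minimum-weight sizing of a symmetric two-bar tubular truss with
# yield and Euler buckling constraints, using simplified illustrative formulas.
import numpy as np
from scipy.optimize import minimize

B, t, P, rho, E, sigma_allow = 30.0, 0.1, 33e3, 0.3, 3.0e7, 1.0e5  # assumed data

def member_length(h):
    return np.sqrt(B**2 + h**2)

def weight(x):
    d, h = x                                   # tube diameter and truss height
    return 2.0 * rho * np.pi * d * t * member_length(h)

def stress(x):
    d, h = x
    return P * member_length(h) / (2.0 * np.pi * d * t * h)

def buckling_limit(x):
    d, h = x
    return np.pi**2 * E * (d**2 + t**2) / (8.0 * member_length(h)**2)

cons = [{"type": "ineq", "fun": lambda x: sigma_allow - stress(x)},        # yield
        {"type": "ineq", "fun": lambda x: buckling_limit(x) - stress(x)}]  # Euler buckling

res = minimize(weight, x0=[2.0, 30.0], method="SLSQP",
               bounds=[(0.5, 10.0), (5.0, 100.0)], constraints=cons)

print("diameter %.2f, height %.2f, weight %.2f" % (res.x[0], res.x[1], res.fun))
```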

  2. Magic in the machine: a computational magician's assistant

    PubMed Central

    Williams, Howard; McOwan, Peter W.

    2014-01-01

    A human magician blends science, psychology, and performance to create a magical effect. In this paper we explore what can be achieved when that human intelligence is replaced or assisted by machine intelligence. Magical effects are all in some form based on hidden mathematical, scientific, or psychological principles; often the parameters controlling these underpinning techniques are hard for a magician to blend to maximize the magical effect required. The complexity is often caused by interacting and often conflicting physical and psychological constraints that need to be optimally balanced. Normally this tuning is done by trial and error, combined with human intuitions. Here we focus on applying Artificial Intelligence methods to the creation and optimization of magic tricks exploiting mathematical principles. We use experimentally derived data about particular perceptual and cognitive features, combined with a model of the underlying mathematical process to provide a psychologically valid metric to allow optimization of magical impact. In the paper we introduce our optimization methodology and describe how it can be flexibly applied to a range of different types of mathematics based tricks. We also provide two case studies as exemplars of the methodology at work: a magical jigsaw, and a mind reading card trick effect. We evaluate each trick created through testing in laboratory and public performances, and further demonstrate the real world efficacy of our approach for professional performers through sales of the tricks in a reputable magic shop in London. PMID:25452736

  3. Mathematical simulation and optimization of cutting mode in turning of workpieces made of nickel-based heat-resistant alloy

    NASA Astrophysics Data System (ADS)

    Bogoljubova, M. N.; Afonasov, A. I.; Kozlov, B. N.; Shavdurov, D. E.

    2018-05-01

    A predictive simulation technique, different from well-known approaches, is proposed for determining optimal cutting modes in the turning of workpieces made of nickel-based heat-resistant alloys. The impact of various factors on the cutting process is analyzed with the purpose of determining optimal machining parameters in accordance with chosen effectiveness criteria. A mathematical optimization model, algorithms and computer programs, and visual graphical forms reflecting the dependence of the effectiveness criteria (productivity, net cost, and tool life) on the parameters of the technological process have been worked out. A nonlinear model for multidimensional functions, solution of equations with multiple unknowns, a coordinate descent method, and heuristic algorithms are adopted to solve the cutting-mode optimization problem. The research shows that, in machining workpieces made from the heat-resistant alloy AISI N07263, the highest productivity is achieved with the following parameters: cutting speed v = 22.1 m/min, feed rate s = 0.26 mm/rev, tool life T = 18 min, and a net cost of 2.45 per hour.

  4. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

    PubMed

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large-scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated by the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis problem, the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved, and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
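
    A minimal sketch of control vector parameterization (CVP) is given below: the time-varying input is discretized into piecewise-constant levels, a deliberately simple dynamic model is integrated for each candidate, and a standard optimizer adjusts the levels. The scalar ODE and target are toy assumptions, not the chemotaxis or FitzHugh-Nagumo models of the paper.

```python
# Hedged sketch: control vector parameterization for a toy first-order system.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T, N = 10.0, 5                          # horizon and number of control intervals
edges = np.linspace(0.0, T, N + 1)
target = 2.0                            # desired final state (assumed)

def simulate(u_levels):
    def rhs(t, x):
        # pick the piecewise-constant control level active at time t
        u = u_levels[min(np.searchsorted(edges, t, side="right") - 1, N - 1)]
        return [-x[0] + u]              # toy dynamics dx/dt = -x + u(t)
    return solve_ivp(rhs, (0.0, T), [0.0], max_step=0.05)

def objective(u_levels):
    sol = simulate(u_levels)
    x_final = sol.y[0, -1]
    effort = np.sum(u_levels**2) * (T / N)
    return (x_final - target)**2 + 1e-3 * effort

res = minimize(objective, x0=np.ones(N), method="Nelder-Mead")
print("optimal control levels:", np.round(res.x, 3))
```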

  5. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  6. Optimization and Control of Agent-Based Models in Biology: A Perspective.

    PubMed

    An, G; Fitzpatrick, B G; Christley, S; Federico, P; Kanarek, A; Neilan, R Miller; Oremland, M; Salinas, R; Laubenbacher, R; Lenhart, S

    2017-01-01

    Agent-based models (ABMs) have become an increasingly important mode of inquiry for the life sciences. They are particularly valuable for systems that are not understood well enough to build an equation-based model. These advantages, however, are counterbalanced by the difficulty of analyzing and using ABMs, due to the lack of the type of mathematical tools available for more traditional models, which leaves simulation as the primary approach. As models become large, simulation becomes challenging. This paper proposes a novel approach to two mathematical aspects of ABMs, optimization and control, and it presents a few first steps outlining how one might carry out this approach. Rather than viewing the ABM as a model, it is to be viewed as a surrogate for the actual system. For a given optimization or control problem (which may change over time), the surrogate system is modeled instead, using data from the ABM and a modeling framework for which ready-made mathematical tools exist, such as differential equations, or for which control strategies can be explored more easily. Once the optimization problem is solved for the model of the surrogate, it is then lifted to the surrogate and tested. The final step is to lift the optimization solution from the surrogate system to the actual system. This program is illustrated with published work, using two relatively simple ABMs as a demonstration, Sugarscape and a consumer-resource ABM. Specific techniques discussed include dimension reduction and approximation of an ABM by difference equations as well as by systems of PDEs, related to certain specific control objectives. This demonstration illustrates the very challenging mathematical problems that need to be solved before this approach can be realistically applied to complex and large ABMs, current and future. The paper outlines a research program to address them.

  7. Optimization of cryoprotectant loading into murine and human oocytes.

    PubMed

    Karlsson, Jens O M; Szurek, Edyta A; Higgins, Adam Z; Lee, Sang R; Eroglu, Ali

    2014-02-01

    Loading of cryoprotectants into oocytes is an important step of the cryopreservation process, in which the cells are exposed to potentially damaging osmotic stresses and chemical toxicity. Thus, we investigated the use of physics-based mathematical optimization to guide design of cryoprotectant loading methods for mouse and human oocytes. We first examined loading of 1.5 M dimethyl sulfoxide (Me(2)SO) into mouse oocytes at 23°C. Conventional one-step loading resulted in rates of fertilization (34%) and embryonic development (60%) that were significantly lower than those of untreated controls (95% and 94%, respectively). In contrast, the mathematically optimized two-step method yielded much higher rates of fertilization (85%) and development (87%). To examine the causes for oocyte damage, we performed experiments to separate the effects of cell shrinkage and Me(2)SO exposure time, revealing that neither shrinkage nor Me(2)SO exposure single-handedly impairs the fertilization and development rates. Thus, damage during one-step Me(2)SO addition appears to result from interactions between the effects of Me(2)SO toxicity and osmotic stress. We also investigated Me(2)SO loading into mouse oocytes at 30°C. At this temperature, fertilization rates were again lower after one-step loading (8%) in comparison to mathematically optimized two-step loading (86%) and untreated controls (96%). Furthermore, our computer algorithm generated an effective strategy for reducing Me(2)SO exposure time, using hypotonic diluents for cryoprotectant solutions. With this technique, 1.5 M Me(2)SO was successfully loaded in only 2.5 min, with 92% fertilizability. Based on these promising results, we propose new methods to load cryoprotectants into human oocytes, designed using our mathematical optimization approach. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Optimization of Cryoprotectant Loading into Murine and Human Oocytes

    PubMed Central

    Karlsson, Jens O.M.; Szurek, Edyta A.; Higgins, Adam Z.; Lee, Sang R.; Eroglu, Ali

    2014-01-01

    Loading of cryoprotectants into oocytes is an important step of the cryopreservation process, in which the cells are exposed to potentially damaging osmotic stresses and chemical toxicity. Thus, we investigated the use of physics-based mathematical optimization to guide design of cryoprotectant loading methods for mouse and human oocytes. We first examined loading of 1.5 M dimethylsulfoxide (Me2SO) into mouse oocytes at 23°C. Conventional one-step loading resulted in rates of fertilization (34%) and embryonic development (60%) that were significantly lower than those of untreated controls (95% and 94%, respectively). In contrast, the mathematically optimized two-step method yielded much higher rates of fertilization (85%) and development (87%). To examine the causes for oocyte damage, we performed experiments to separate the effects of cell shrinkage and Me2SO exposure time, revealing that neither shrinkage nor Me2SO exposure single-handedly impairs the fertilization and development rates. Thus, damage during one-step Me2SO addition appears to result from interactions between the effects of Me2SO toxicity and osmotic stress. We also investigated Me2SO loading into mouse oocytes at 30°C. At this temperature, fertilization rates were again lower after one-step loading (8%) in comparison to mathematically optimized two-step loading (86%) and untreated controls (96%). Furthermore, our computer algorithm generated an effective strategy for reducing Me2SO exposure time, using hypotonic diluents for cryoprotectant solutions. With this technique, 1.5 M Me2SO was successfully loaded in only 2.5 min, with 92% fertilizability. Based on these promising results, we propose new methods to load cryoprotectants into human oocytes, designed using our mathematical optimization approach. PMID:24246951

  9. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    PubMed

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and for a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.
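
    A sketch of the kind of windowed, Fourier-based coherence calculation such an analysis builds on is given below: slide a window over paired ABP/ICP-like signals and flag windows whose slow-wave coherence exceeds a threshold. The synthetic signals, window length, frequency band, and threshold are assumptions for illustration only.

```python
# Hedged sketch: windowed magnitude-squared coherence between two correlated signals.
import numpy as np
from scipy.signal import coherence

fs = 1.0                                     # one sample per second (assumed)
t = np.arange(0, 4096)
rng = np.random.default_rng(1)
abp = np.sin(2 * np.pi * 0.01 * t) + 0.5 * rng.standard_normal(t.size)
icp = 0.8 * np.sin(2 * np.pi * 0.01 * t) + 0.5 * rng.standard_normal(t.size)

win_len, threshold = 1024, 0.5
for start in range(0, len(t) - win_len + 1, win_len):
    a = abp[start:start + win_len]
    b = icp[start:start + win_len]
    f, coh = coherence(a, b, fs=fs, nperseg=256)
    band = (f > 0.005) & (f < 0.05)          # slow-wave band of interest (assumed)
    flagged = coh[band].mean() > threshold
    print(f"window starting at {start:5d}s: mean coherence {coh[band].mean():.2f}"
          f"{'  <-- selected' if flagged else ''}")
```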

  10. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    PubMed Central

    Faltermeier, Rupert; Proescholdt, Martin A.; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and for a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses. PMID:26693250

  11. Optimization of structures undergoing harmonic or stochastic excitation. Ph.D. Thesis; [atmospheric turbulence and white noise]

    NASA Technical Reports Server (NTRS)

    Johnson, E. H.

    1975-01-01

    The optimal design of simple structures subjected to dynamic loads, with constraints on the structures' responses, was investigated. Optimal designs were examined for one-dimensional structures excited by harmonically oscillating loads, similar structures excited by white noise, and a wing in the presence of continuous atmospheric turbulence. The first has constraints on the maximum allowable stress, while the last two place bounds on the probability of failure of the structure. Approximations were made to replace the time parameter with a frequency parameter. For the first problem, this involved the steady-state response, and in the remaining cases, power spectral techniques were employed to find the root mean square values of the responses. Optimal solutions were found by using computer algorithms that combined finite element methods with optimization techniques based on mathematical programming. It was found that the inertial loads for these dynamic problems result in optimal structures that are radically different from those obtained for structures loaded statically by forces of comparable magnitude.

  12. Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David

    2016-01-01

    Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.

  13. Applications of numerical optimization methods to helicopter design problems: A survey

    NASA Technical Reports Server (NTRS)

    Miura, H.

    1984-01-01

    Applications of mathematical programming methods to improve the design of helicopters and their components are surveyed. Applications of multivariable search techniques in finite-dimensional space are considered. Five categories of helicopter design problems are addressed: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.

  14. Applications of numerical optimization methods to helicopter design problems - A survey

    NASA Technical Reports Server (NTRS)

    Miura, H.

    1985-01-01

    Applications of mathematical programming methods to improve the design of helicopters and their components are surveyed. Applications of multivariable search techniques in finite-dimensional space are considered. Five categories of helicopter design problems are addressed: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.

  15. Applications of numerical optimization methods to helicopter design problems - A survey

    NASA Technical Reports Server (NTRS)

    Miura, H.

    1984-01-01

    Applications of mathematical programming methods to improve the design of helicopters and their components are surveyed. Applications of multivariable search techniques in finite-dimensional space are considered. Five categories of helicopter design problems are addressed: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.

  16. Mathematical Analysis and Optimization of Infiltration Processes

    NASA Technical Reports Server (NTRS)

    Chang, H.-C.; Gottlieb, D.; Marion, M.; Sheldon, B. W.

    1997-01-01

    A variety of infiltration techniques can be used to fabricate solid materials, particularly composites. In general these processes can be described with at least one time dependent partial differential equation describing the evolution of the solid phase, coupled to one or more partial differential equations describing mass transport through a porous structure. This paper presents a detailed mathematical analysis of a relatively simple set of equations which is used to describe chemical vapor infiltration. The results demonstrate that the process is controlled by only two parameters, alpha and beta. The optimization problem associated with minimizing the infiltration time is also considered. Allowing alpha and beta to vary with time leads to significant reductions in the infiltration time, compared with the conventional case where alpha and beta are treated as constants.

  17. Design of a candidate flutter suppression control law for DAST ARW-2. [Drones for Aerodynamic and Structural Testing Aeroelastic Research Wing]

    NASA Technical Reports Server (NTRS)

    Adams, W. M., Jr.; Tiffany, S. H.

    1983-01-01

    A control law is developed to suppress symmetric flutter for a mathematical model of an aeroelastic research vehicle. An implementable control law is attained by including modified LQG (linear quadratic Gaussian) design techniques, controller order reduction, and gain scheduling. An alternate (complementary) design approach is illustrated for one flight condition wherein nongradient-based constrained optimization techniques are applied to maximize controller robustness.

  18. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method, based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and with parameters appearing on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
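
    The sketch below contrasts a first-order (conjugate gradient) and a second-order (Newton-type) optimizer on a smooth test function, mirroring the first-/second-order distinction discussed above. The Rosenbrock function merely stands in for a control-parameter objective; it is not the controlled dynamical system of the paper.

```python
# Hedged sketch: first-order vs. second-order unconstrained optimization with SciPy.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0, 0.7, 0.8])

cg = minimize(rosen, x0, jac=rosen_der, method="CG")                      # conjugate gradient
newton = minimize(rosen, x0, jac=rosen_der, hess=rosen_hess, method="Newton-CG")

print("conjugate gradient:", cg.nit, "iterations, f =", cg.fun)
print("Newton-CG         :", newton.nit, "iterations, f =", newton.fun)
```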

  19. Mathematical Model and Artificial Intelligent Techniques Applied to a Milk Industry through DSM

    NASA Astrophysics Data System (ADS)

    Babu, P. Ravi; Divya, V. P. Sree

    2011-08-01

    The resources for electrical energy are depleting and hence the gap between supply and demand is continuously increasing. Under such circumstances, the option left is optimal utilization of the available energy resources. The main objective of this chapter is to discuss peak load management and how to overcome the problems associated with it in processing industries, such as the milk industry, with the help of DSM techniques. The chapter presents a generalized mathematical model for minimizing the total operating cost of the industry subject to the relevant constraints. The work presented in this chapter also reports the results of applying Neural Network, Fuzzy Logic, and Demand Side Management (DSM) techniques to a medium-scale milk industry consumer in India, achieving an improvement in load factor, a reduction in Maximum Demand (MD), and savings in the consumer's energy bill.

  20. Model correlation and damage location for large space truss structures: Secant method development and evaluation

    NASA Technical Reports Server (NTRS)

    Smith, Suzanne Weaver; Beattie, Christopher A.

    1991-01-01

    On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches include an identification technique to determine structural characteristics from measurements of the structure response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.

  1. Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale

    1997-01-01

    The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
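
    For reference, the sketch below shows the classical fully stressed resizing step that FUD-style methods build on: each member area is scaled by the ratio of its computed stress to the allowable stress and the analysis is repeated. The three-member "analysis" is a crude statically determinate stand-in with assumed fixed member forces, not the Integrated Force Method used in the paper, and it omits the displacement constraints that MFUD adds.

```python
# Hedged sketch: fully stressed design resizing iteration on a toy three-member problem.
import numpy as np

sigma_allow = 25e3                       # allowable stress (assumed)
loads = np.array([60e3, 40e3, 80e3])     # axial force in each member (assumed fixed)
areas = np.ones(3)                       # initial member areas

for it in range(20):
    stresses = loads / areas             # member stress from the simplified analysis
    ratio = stresses / sigma_allow
    if np.all(np.abs(ratio - 1.0) < 1e-6):
        break
    areas = areas * ratio                # fully stressed resizing rule

print("iterations:", it, "final areas:", np.round(areas, 4))
```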

  2. Parallel Performance of a Combustion Chemistry Simulation

    DOE PAGES

    Skinner, Gregg; Eigenmann, Rudolf

    1995-01-01

    We used a description of a combustion simulation's mathematical and computational methods to develop a version for parallel execution. The result was a reasonable performance improvement on small numbers of processors. We applied several important programming techniques, which we describe, in optimizing the application. This work has implications for programming languages, compiler design, and software engineering.

  3. Classification of mathematics deficiency using shape and scale analysis of 3D brain structures

    NASA Astrophysics Data System (ADS)

    Kurtek, Sebastian; Klassen, Eric; Gore, John C.; Ding, Zhaohua; Srivastava, Anuj

    2011-03-01

    We investigate the use of a recent technique for shape analysis of brain substructures in identifying learning disabilities in third-grade children. This Riemannian technique provides a quantification of differences in shapes of parameterized surfaces, using a distance that is invariant to rigid motions and re-parameterizations. Additionally, it provides an optimal registration across surfaces for improved matching and comparisons. We utilize an efficient gradient based method to obtain the optimal re-parameterizations of surfaces. In this study we consider 20 different substructures in the human brain and correlate the differences in their shapes with abnormalities manifested in deficiency of mathematical skills in 106 subjects. The selection of these structures is motivated in part by the past links between their shapes and cognitive skills, albeit in broader contexts. We have studied the use of both individual substructures and multiple structures jointly for disease classification. Using a leave-one-out nearest neighbor classifier, we obtained a 62.3% classification rate based on the shape of the left hippocampus. The use of multiple structures resulted in an improved classification rate of 71.4%.

  4. NASA/Howard University Large Space Structures Institute

    NASA Technical Reports Server (NTRS)

    Broome, T. H., Jr.

    1984-01-01

    Basic research on the engineering behavior of large space structures is presented. Methods of structural analysis, control, and optimization of large flexible systems are examined. Topics of investigation include the Load Correction Method (LCM) modeling technique, stabilization of flexible bodies by feedback control, mathematical refinement of analysis equations, optimization of the design of structural components, deployment dynamics, and the use of microprocessors in attitude and shape control of large space structures. Information on key personnel, budgeting, support plans and conferences is included.

  5. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. Using the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
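
    A minimal sketch of the kind of nonlinear least-squares fit such a program performs is shown below, using SciPy's Levenberg-Marquardt implementation rather than MULTIVAR's own engines. The exponential model and synthetic data are assumptions for illustration.

```python
# Hedged sketch: nonlinear curve fit by minimizing the sum of squared residuals.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
x = np.linspace(0, 5, 40)
true = (2.5, 1.3, 0.4)
y = true[0] * np.exp(-true[1] * x) + true[2] + 0.02 * rng.standard_normal(x.size)

def residuals(p):
    a, b, c = p
    return a * np.exp(-b * x) + c - y     # residual vector, minimized in the 2-norm

fit = least_squares(residuals, x0=[1.0, 1.0, 0.0], method="lm")  # Levenberg-Marquardt
print("fitted parameters:", np.round(fit.x, 3))
```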

  6. Automated parameterization of intermolecular pair potentials using global optimization techniques

    NASA Astrophysics Data System (ADS)

    Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk

    2014-12-01

    In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
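
    The sketch below illustrates a global search over force-field-like parameters with one of the algorithm families compared in the paper (differential evolution), using SciPy's implementation. The "simulation" is replaced by a cheap surrogate with several local minima; in the paper's setting each evaluation would be a costly, noisy molecular simulation and the loss would compare simulated to target observables.

```python
# Hedged sketch: differential evolution over two illustrative pair-potential parameters.
import numpy as np
from scipy.optimize import differential_evolution

target = np.array([1.2, 0.8])                # assumed target observables

def observables(params):
    eps, sigma = params                      # illustrative parameters (not a real force field)
    return np.array([eps * np.cos(3 * sigma) + sigma,
                     np.sin(eps) + 0.5 * sigma**2])

def loss(params):
    return np.sum((observables(params) - target)**2)

result = differential_evolution(loss, bounds=[(0.1, 3.0), (0.1, 3.0)], seed=42, tol=1e-8)
print("best parameters:", np.round(result.x, 3), " loss:", result.fun)
```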

  7. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.

    PubMed

    Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem that assigns a set of nurses to daily shifts while considering both hard and soft constraints, and a suitable metaheuristic technique is required to solve it. This work proposes a metaheuristic called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors use a multiobjective mathematical programming model and propose a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully to solve this multiobjective scheduling optimization problem. MODBCO is an integration of deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.

  8. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem

    PubMed Central

    Amudhavel, J.; Pothula, Sujatha; Dhavachelvan, P.

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem that assigns a set of nurses to daily shifts while considering both hard and soft constraints, and a suitable metaheuristic technique is required to solve it. This work proposes a metaheuristic called the Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors use a multiobjective mathematical programming model and propose a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully to solve this multiobjective scheduling optimization problem. MODBCO is an integration of deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria. PMID:28473849

  9. An inequality for detecting financial fraud, derived from the Markowitz Optimal Portfolio Theory

    NASA Astrophysics Data System (ADS)

    Bard, Gregory V.

    2016-12-01

    The Markowitz Optimal Portfolio Theory, published in 1952, is well-known, and was often taught because it blends Lagrange Multipliers, matrices, statistics, and mathematical finance. However, the theory faded from prominence in American investing, as Business departments at US universities shifted from techniques based on mathematics, finance, and statistics, to focus instead on leadership, public speaking, interpersonal skills, advertising, etc. The author proposes a new application of Markowitz's Theory: the detection of a fairly broad category of financial fraud (called "Ponzi schemes" in American newspapers) by looking at a particular inequality derived from the Markowitz Optimal Portfolio Theory, relating volatility and expected rate of return. For example, one recent Ponzi scheme was that of Bernard Madoff, uncovered in December 2008, which comprised fraud totaling 64,800,000,000 US dollars [23]. The objective is to compare investments with the "efficient frontier" as predicted by Markowitz's theory. Violations of the inequality should be impossible in theory; therefore, in practice, violations might indicate fraud.
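
    A sketch of this frontier check is given below: from assumed asset statistics, compute the minimum volatility attainable for a target return using the closed-form mean-variance frontier, and flag any reported (return, volatility) pair that claims to beat it. The asset means, covariances, and the "reported" figures are made-up numbers, and the unconstrained frontier formula is only one simple instance of the inequality idea described above.

```python
# Hedged sketch: minimum-variance frontier check for a reported track record.
import numpy as np

mu = np.array([0.04, 0.08, 0.12])                  # assumed expected annual returns
Sigma = np.array([[0.010, 0.002, 0.001],
                  [0.002, 0.030, 0.004],
                  [0.001, 0.004, 0.060]])          # assumed covariance of returns
ones = np.ones_like(mu)

inv = np.linalg.inv(Sigma)
A = ones @ inv @ ones
B = ones @ inv @ mu
C = mu @ inv @ mu
D = A * C - B**2

def frontier_sigma(target_return):
    """Smallest achievable volatility for a given expected return (no short-sale limits)."""
    var = (A * target_return**2 - 2 * B * target_return + C) / D
    return np.sqrt(var)

reported_return, reported_sigma = 0.11, 0.02       # a suspiciously smooth track record
if reported_sigma < frontier_sigma(reported_return):
    print("reported performance lies above the efficient frontier: investigate")
else:
    print("reported performance is consistent with the frontier")
```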

  10. Development of an empirical mathematical model for describing and optimizing the hygiene potential of a thermophilic anaerobic bioreactor treating faeces.

    PubMed

    Lübken, M; Wichern, M; Bischof, F; Prechtl, S; Horn, H

    2007-01-01

    Poor sanitation and insufficient disposal of sewage and faeces are primarily responsible for water-associated health problems in developing countries. Domestic sewage and faeces are prevalently discharged into surface waters which are used by the inhabitants as a source of drinking water. This paper presents a decentralized anaerobic process technique for handling such domestic organic waste. Such an efficient and compact system for treating faeces and food waste may be of great benefit to developing countries. Besides a stable biogas production for energy generation, the reduction of bacterial pathogens is of particular importance. In our research we investigated the pathogen removal capacity of the reactor, which has been operated under thermophilic conditions. Faecal coliforms and intestinal enterococci were used as indicator organisms for bacterial pathogens. An empirical mathematical model has been developed using the multiple regression analysis technique. The model shows a high correlation between removal efficiency and both hydraulic retention time (HRT) and temperature. With this model, an optimized HRT for defined bacterial pathogen effluent standards can be easily calculated, so that the hygiene potential can be evaluated along with economic aspects. In this paper, not only are results describing the hygiene potential of a thermophilic anaerobic bioreactor presented, but also an exemplary method for drawing the right conclusions from biological tests with the aid of mathematical tools.
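
    The sketch below shows the shape of such an empirical multiple-regression model: log pathogen removal regressed on HRT and temperature, then inverted to find the HRT needed for a required removal. The data points and the linear form are invented for illustration and are not the fitted model of the paper.

```python
# Hedged sketch: multiple linear regression of log removal on HRT and temperature.
import numpy as np

# columns: HRT (days), temperature (deg C), observed log10 removal of indicator organisms
data = np.array([[ 5.0, 50.0, 2.1],
                 [ 8.0, 50.0, 2.9],
                 [12.0, 55.0, 4.0],
                 [15.0, 55.0, 4.6],
                 [10.0, 60.0, 4.2],
                 [20.0, 60.0, 6.1]])

X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])   # intercept, HRT, T
y = data[:, 2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b_hrt, b_temp = coef
print(f"log removal ~ {b0:.2f} + {b_hrt:.2f}*HRT + {b_temp:.2f}*T")

# Invert the fitted model: HRT needed for a required log removal at a given temperature.
required_log_removal, T = 5.0, 55.0
hrt_needed = (required_log_removal - b0 - b_temp * T) / b_hrt
print(f"HRT needed at {T} degC: {hrt_needed:.1f} days")
```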

  11. The role of optimization in the next generation of computer-based design tools

    NASA Technical Reports Server (NTRS)

    Rogan, J. Edward

    1989-01-01

    There is a close relationship between design optimization and the emerging new generation of computer-based tools for engineering design. With some notable exceptions, the development of these new tools has not taken full advantage of recent advances in numerical design optimization theory and practice. Recent work in the field of design process architecture has included an assessment of the impact of next-generation computer-based design tools on the design process. These results are summarized, and insights into the role of optimization in a design process based on these next-generation tools are presented. An example problem has been worked out to illustrate the application of this technique. The example problem - layout of an aircraft main landing gear - is one that is simple enough to be solved by many other techniques. Although the mathematical relationships describing the objective function and constraints for the landing gear layout problem can be written explicitly and are quite straightforward, an approximation technique has been used in the solution of this problem that can just as easily be applied to integrate supportability or producibility assessments using theory of measurement techniques into the design decision-making process.

  12. Optimal maintenance of a multi-unit system under dependencies

    NASA Astrophysics Data System (ADS)

    Sung, Ho-Joon

    The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, the study of maintenance, known as the optimal maintenance problem, has recently gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from studies concerned with identifying maintenance policies that provide the required system availability at the minimum possible cost, to topics on the imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraints of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) do not often lend themselves to closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependencies is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation proposes an alternative methodology for solving optimal maintenance problems that aims to achieve the same end goals as Reliability Centered Maintenance (RCM). RCM was first introduced in the aircraft industry in an attempt to bridge the gap between the empirically driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that enable the prioritizing of functions based on criticality and influence are combined with mathematical modeling to obtain the optimal maintenance policies. Where this thesis deviates from RCM is in its proposal to apply quantitative processes directly to model the reliability measures in the optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining discrete simulated outcomes of the reliability measures for each combination of decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Representing the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of numerical optimization techniques to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability.
The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to the Time Limited Dispatch (TLD) of a Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model based on quasi-renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results are obtained, the RSEs are generated based on goodness-of-fit measures to yield as parsimonious a model as possible for constructing the optimization problem. Under the assumption of constant failure rates for the lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common industry approach that leverages a Continuous Time Markov Chain (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance costs, e.g., inspection cost, system corrective maintenance cost, etc., resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more general assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distributions, it was shown to successfully capture component wear-out as well as the economic dependencies among the system components.

  13. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
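
    One of the ingredients listed above, numerical derivatives with complex variables, can be illustrated independently of the SOFC model. The sketch below is an assumption-laden stand-in, not the dissertation's code: it compares a complex-step derivative with a forward finite difference on an arbitrary scalar test function.

```python
import numpy as np

def f(x):
    # arbitrary smooth test function standing in for a fuel-cell cost function
    return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

x0, h = 1.5, 1e-20
cs = np.imag(f(x0 + 1j * h)) / h        # complex step: no subtractive cancellation, h can be tiny
fd = (f(x0 + 1e-6) - f(x0)) / 1e-6      # forward finite difference for comparison

print(f"complex step : {cs:.12f}")
print(f"finite diff  : {fd:.12f}")
```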

  14. Optimal harvesting for a predator-prey agent-based model using difference equations.

    PubMed

    Oremland, Matthew; Laubenbacher, Reinhard

    2015-03-01

    In this paper, a method known as Pareto optimization is applied in the solution of a multi-objective optimization problem. The system in question is an agent-based model (ABM) wherein global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated in order to capture the dynamics of the ABM; while the original model is built up analytically from the rules of the model, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory—we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes to allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve a multi-objective optimization problem via a technique known as Pareto optimization, a heuristic evolutionary algorithm. Results show that the equation model is a good fit for ABM data; Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
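
    The Pareto optimization step can be illustrated with a minimal non-dominated filter. The candidate harvesting policies and their two objective values below are random stand-ins rather than outputs of the ABM or its equation model.

```python
import numpy as np

rng = np.random.default_rng(1)
objs = rng.random((50, 2))      # column 0 and 1: two objectives, both to be minimized

def pareto_mask(points):
    """True for points not dominated by any other point (minimization in every column)."""
    mask = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        dominated = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominated.any():
            mask[i] = False
    return mask

front = objs[pareto_mask(objs)]
print(f"{front.shape[0]} non-dominated policies out of {objs.shape[0]}")
```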

  15. Safe Onboard Guidance and Control Under Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James

    2011-01-01

    An algorithm was developed that determines the fuel-optimal spacecraft guidance trajectory that takes into account uncertainty, in order to guarantee that mission safety constraints are satisfied with the required probability. The algorithm uses convex optimization to solve for the optimal trajectory. Convex optimization is amenable to onboard solution due to its excellent convergence properties. The algorithm is novel because, unlike prior approaches, it does not require time-consuming evaluation of multivariate probability densities. Instead, it uses a new mathematical bounding approach to ensure that probability constraints are satisfied, and it is shown that the resulting optimization is convex. Empirical results show that the approach is many orders of magnitude less conservative than existing set conversion techniques, for a small penalty in computation time.
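
    A minimal sketch of the general idea of replacing a probabilistic constraint with a deterministic tightening is shown below; it uses a simple Gaussian quantile back-off with a union-bound risk split, which is a generic textbook device and not the specific bounding approach of the reference.

```python
from scipy.stats import norm

total_risk = 0.01                      # allowed probability of violating any constraint
n_constraints = 4
eps_i = total_risk / n_constraints     # uniform risk allocation via Boole's (union) bound

sigma = 0.5                            # assumed std. dev. of the uncertain constraint quantity
margin = norm.ppf(1.0 - eps_i) * sigma # deterministic back-off applied to each constraint
print(f"per-constraint risk {eps_i:.4f} -> back-off {margin:.3f} (in constraint units)")
```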

  16. Genetic algorithms using SISAL parallel programming language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tejada, S.

    1994-05-06

    Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired to implement genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I will discuss the implementation and performance of parallel genetic algorithms in SISAL.
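
    Since SISAL tooling is no longer widely available, the sketch below restates the same GA steps (fitness evaluation, crossover, mutation) in Python on a toy one-max problem; it is an illustration of the algorithm, not a translation of the SISAL implementation.

```python
import random

random.seed(0)
N_BITS, POP, GENS = 32, 40, 60

def fitness(ind):                 # "one-max": count of 1-bits
    return sum(ind)

def crossover(a, b):              # single-point crossover
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def mutate(ind, rate=1.0 / N_BITS):
    return [bit ^ (random.random() < rate) for bit in ind]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]                       # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print("best fitness:", fitness(max(pop, key=fitness)))
```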

  17. Price schedules coordination for electricity pool markets

    NASA Astrophysics Data System (ADS)

    Legbedji, Alexis Motto

    2002-04-01

    We consider the optimal coordination of a class of mathematical programs with equilibrium constraints, which is formally interpreted as a resource-allocation problem. Many decomposition techniques were proposed to circumvent the difficulty of solving large systems with limited computer resources. The considerable improvement in computer architecture has allowed the solution of large-scale problems with increasing speed. Consequently, interest in decomposition techniques has waned. Nonetheless, there is an important class of applications for which decomposition techniques will still be relevant, among others, distributed systems---the Internet, perhaps, being the most conspicuous example---and competitive economic systems. Conceptually, a competitive economic system is a collection of agents that have similar or different objectives while sharing the same system resources. In theory, constructing a large-scale mathematical program and solving it centrally, using currently available computing power can optimize such systems of agents. In practice, however, because agents are self-interested and not willing to reveal some sensitive corporate data, one cannot solve these kinds of coordination problems by simply maximizing the sum of agent's objective functions with respect to their constraints. An iterative price decomposition or Lagrangian dual method is considered best suited because it can operate with limited information. A price-directed strategy, however, can only work successfully when coordinating or equilibrium prices exist, which is not generally the case when a weak duality is unavoidable. Showing when such prices exist and how to compute them is the main subject of this thesis. Among our results, we show that, if the Lagrangian function of a primal program is additively separable, price schedules coordination may be attained. The prices are Lagrange multipliers, and are also the decision variables of a dual program. In addition, we propose a new form of augmented or nonlinear pricing, which is an example of the use of penalty functions in mathematical programming. Applications are drawn from mathematical programming problems of the form arising in electric power system scheduling under competition.
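
    The price-directed (Lagrangian dual) coordination idea can be sketched with a toy resource-allocation problem: each agent responds to a posted price using only its private utility, and the coordinator adjusts the price by a subgradient step on the shared constraint. The utilities, capacity, and step sizes below are illustrative assumptions.

```python
import numpy as np

a = np.array([2.0, 5.0, 8.0])     # private utility weights: u_i(x) = a_i * ln(1 + x)
capacity = 6.0                    # shared resource limit: sum of allocations <= capacity
price, step = 1.0, 0.2

for it in range(200):
    # each agent's best response to the price: maximize a_i*ln(1+x) - price*x
    demand = np.maximum(a / price - 1.0, 0.0)
    # coordinator's diminishing-step subgradient update on the dual (price) variable
    price = max(price + step / (1 + it) * (demand.sum() - capacity), 1e-6)

demand = np.maximum(a / price - 1.0, 0.0)
print("approximate clearing price:", round(price, 3), "allocations:", np.round(demand, 3))
```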

  18. Mathematical optimization of high dose-rate brachytherapy—derivation of a linear penalty model from a dose-volume model

    NASA Astrophysics Data System (ADS)

    Morén, B.; Larsson, T.; Carlsson Tedgren, Å.

    2018-03-01

    High dose-rate brachytherapy is a method for cancer treatment where the radiation source is placed within the body, inside or close to a tumour. For dose planning, mathematical optimization techniques are being used in practice and the most common approach is to use a linear model which penalizes deviations from specified dose limits for the tumour and for nearby organs. This linear penalty model is easy to solve, but its weakness lies in the poor correlation of its objective value and the dose-volume objectives that are used clinically to evaluate dose distributions. Furthermore, the model contains parameters that have no clear clinical interpretation. Another approach for dose planning is to solve mixed-integer optimization models with explicit dose-volume constraints which include parameters that directly correspond to dose-volume objectives, and which are therefore tangible. The two mentioned models take the overall goals for dose planning into account in fundamentally different ways. We show that there is, however, a mathematical relationship between them by deriving a linear penalty model from a dose-volume model. This relationship has not been established before and improves the understanding of the linear penalty model. In particular, the parameters of the linear penalty model can be interpreted as dual variables in the dose-volume model.
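
    The linear penalty model referred to above can be written, in a stripped-down form, as a weighted sum of one-sided deviations from the dose limits; the dose points, limits, and weights in the sketch below are illustrative numbers, not clinical values.

```python
import numpy as np

dose_tumour = np.array([8.2, 9.5, 7.1, 10.3])   # Gy at tumour dose points
dose_organ  = np.array([4.9, 6.2, 5.5])         # Gy at organ-at-risk dose points
L_t, U_o = 8.5, 5.0                             # lower limit (tumour), upper limit (organ)
w_t, w_o = 1.0, 2.0                             # penalty weights (no direct clinical meaning)

# penalize underdosing of the tumour and overdosing of the organ, linearly and one-sided
penalty = (w_t * np.maximum(L_t - dose_tumour, 0.0).sum()
           + w_o * np.maximum(dose_organ - U_o, 0.0).sum())
print("linear penalty objective:", penalty)
```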

  19. Cost minimizing of cutting process for CNC thermal and water-jet machines

    NASA Astrophysics Data System (ADS)

    Tavaeva, Anastasia; Kurennov, Dmitry

    2015-11-01

    This paper deals with the optimization of the cutting process for CNC thermal and water-jet machines. The accuracy with which the objective function parameters of the optimization problem are calculated is investigated. The paper shows that the working tool path speed is not a constant value; it depends on several parameters that are described in the paper. Relations for the working tool path speed as a function of the number of NC program frames, the length of straight cuts, and the part configuration are presented. Based on the results obtained, correction coefficients for the working tool speed are defined. Additionally, the optimization problem may be solved using a mathematical model that takes into account the additional restrictions of thermal cutting (choice of piercing and output tool points, precedence conditions, thermal deformations). The second part of the paper considers non-standard cutting techniques, which may reduce cutting cost and time compared with standard cutting techniques, and evaluates the effectiveness of their application. Future research directions are indicated at the end of the paper.

  20. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    NASA Astrophysics Data System (ADS)

    Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.

    2017-03-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persists. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.

  1. Elaine Hale | NREL

    Science.gov Websites

    Analysis Center. Areas of Expertise: mathematical modeling, simulation, and optimization of complex … Affiliations: … Industrial and Applied Mathematics; Mathematical Optimization Society. Featured Publications: Stoll, Brady, …

  2. Wood transportation systems-a spin-off of a computerized information and mapping technique

    Treesearch

    William W. Phillips; Thomas J. Corcoran

    1978-01-01

    A computerized mapping system originally developed for planning the control of the spruce budworm in Maine has been extended into a tool for planning road network development and optimizing transportation costs. A budgetary process and a mathematical linear programming routine are used interactively with the mapping and information retrieval capabilities of the system...
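
    The linear-programming component of such a system can be illustrated with a classic transportation problem solved in Python; the sources, mills, volumes, and unit haul costs below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],      # $/unit hauled from source i to mill j
                 [5.0, 3.0, 7.0]])
supply = np.array([30.0, 40.0])        # units available at each source
demand = np.array([20.0, 25.0, 25.0])  # units required at each mill

n_s, n_m = cost.shape
A_eq, b_eq = [], []
for j in range(n_m):                   # each mill's demand must be met exactly
    row = np.zeros(n_s * n_m)
    row[j::n_m] = 1.0
    A_eq.append(row); b_eq.append(demand[j])
A_ub, b_ub = [], []
for i in range(n_s):                   # shipments from a source cannot exceed its supply
    row = np.zeros(n_s * n_m)
    row[i * n_m:(i + 1) * n_m] = 1.0
    A_ub.append(row); b_ub.append(supply[i])

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("minimum haul cost:", res.fun)
print("shipment plan:\n", res.x.reshape(n_s, n_m))
```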

  3. A Mathematical Model for Allocation of School Resources to Optimize a Selected Output.

    ERIC Educational Resources Information Center

    McAfee, Jackson K.

    The methodology of costing an education program by identifying the resources it utilizes places all costs within the framework of staff, equipment, materials, facilities, and services. This paper suggests that this methodology is much stronger than the more traditional budgetary and cost per pupil approach. The techniques of data collection are…

  4. Manufacture of conical springs with elastic medium technology improvement

    NASA Astrophysics Data System (ADS)

    Kurguzov, S. A.; Mikhailova, U. V.; Kalugina, O. B.

    2018-01-01

    This article considers improving the manufacturing technology by using an elastic medium in the forming space of the stamping tool to improve the performance characteristics of conical springs and reduce their production costs. A technique for estimating the operational properties of a disk spring is developed by mathematically modeling the compression process during spring operation. A technique for optimizing the design parameters of a conical spring is also developed, which ensures a minimum stress value when the spring is operated at the edge of the spring opening.

  5. Multidimensional scaling for evolutionary algorithms--visualization of the path through search space and solution space using Sammon mapping.

    PubMed

    Pohlheim, Hartmut

    2006-01-01

    Multidimensional scaling as a technique for the presentation of high-dimensional data with standard visualization techniques is presented. The technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space of the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs regarding the variables of individuals and multi-criteria objective values (path through the solution space).
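
    A minimal Sammon-mapping sketch is shown below: a 2-D configuration is found by minimizing Sammon's stress with a general-purpose optimizer. The high-dimensional points are random stand-ins for the best individuals of an optimization run.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 6))              # 20 "individuals" in a 6-D search space
d_star = pdist(X)                         # pairwise distances in the original space

def sammon_stress(y_flat):
    d = pdist(y_flat.reshape(-1, 2)) + 1e-12      # distances in the 2-D map
    return np.sum((d_star - d) ** 2 / (d_star + 1e-12)) / d_star.sum()

y0 = rng.normal(size=X.shape[0] * 2) * 0.1        # random initial 2-D configuration
res = minimize(sammon_stress, y0, method="L-BFGS-B")
print("final Sammon stress:", round(res.fun, 4))
```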

  6. Current advancements and challenges in soil-root interactions modelling

    NASA Astrophysics Data System (ADS)

    Schnepf, Andrea; Huber, Katrin; Abesha, Betiglu; Meunier, Felicien; Leitner, Daniel; Roose, Tiina; Javaux, Mathieu; Vanderborght, Jan; Vereecken, Harry

    2015-04-01

    Roots change their surrounding soil chemically, physically and biologically. This includes changes in soil moisture and solute concentration, the exudation of organic substances into the rhizosphere, increased growth of soil microorganisms, or changes in soil structure. The fate of water and solutes in the root zone is highly determined by these root-soil interactions. Mathematical models of soil-root systems in combination with non-invasive techniques able to characterize root systems are a promising tool to understand and predict the behaviour of water and solutes in the root zone. With respect to different fields of applications, predictive mathematical models can contribute to the solution of optimal control problems in plant resource efficiency. This may result in significant gains in productivity, efficiency and environmental sustainability in various land use activities. Major challenges include the coupling of model parameters of the relevant processes with the surrounding environment such as temperature, nutrient concentration or soil water content. A further challenge is the mathematical description of the different spatial and temporal scales involved. This includes in particular the branched structures formed by root systems or the external mycelium of mycorrhizal fungi. Here, reducing complexity as well as bridging between spatial scales is required. Furthermore, the combination of experimental and mathematical techniques may advance the field enormously. Here, the use of root system, soil and rhizosphere models is presented through a number of modelling case studies, including image based modelling of phosphate uptake by a root with hairs, model-based optimization of root architecture for phosphate uptake from soil, upscaling of rhizosphere models, modelling root growth in structured soil, and the effect of root hydraulic architecture on plant water uptake efficiency and drought resistance.

  7. Current Advancements and Challenges in Soil-Root Interactions Modelling

    NASA Astrophysics Data System (ADS)

    Schnepf, A.; Huber, K.; Abesha, B.; Meunier, F.; Leitner, D.; Roose, T.; Javaux, M.; Vanderborght, J.; Vereecken, H.

    2014-12-01

    Roots change their surrounding soil chemically, physically and biologically. This includes changes in soil moisture and solute concentration, the exudation of organic substances into the rhizosphere, increased growth of soil microorganisms, or changes in soil structure. The fate of water and solutes in the root zone is highly determined by these root-soil interactions. Mathematical models of soil-root systems in combination with non-invasive techniques able to characterize root systems are a promising tool to understand and predict the behaviour of water and solutes in the root zone. With respect to different fields of applications, predictive mathematical models can contribute to the solution of optimal control problems in plant resource efficiency. This may result in significant gains in productivity, efficiency and environmental sustainability in various land use activities. Major challenges include the coupling of model parameters of the relevant processes with the surrounding environment such as temperature, nutrient concentration or soil water content. A further challenge is the mathematical description of the different spatial and temporal scales involved. This includes in particular the branched structures formed by root systems or the external mycelium of mycorrhizal fungi. Here, reducing complexity as well as bridging between spatial scales is required. Furthermore, the combination of experimental and mathematical techniques may advance the field enormously. Here, the use of root system, soil and rhizosphere models is presented through a number of modelling case studies, including image based modelling of phosphate uptake by a root with hairs, model-based optimization of root architecture for phosphate uptake from soil, upscaling of rhizosphere models, modelling root growth in structured soil, and the effect of root hydraulic architecture on plant water uptake efficiency and drought resistance.

  8. Discrete optimal control approach to a four-dimensional guidance problem near terminal areas

    NASA Technical Reports Server (NTRS)

    Nagarajan, N.

    1974-01-01

    Description of a computer-oriented technique to generate the necessary control inputs to guide an aircraft in a given time from a given initial state to a prescribed final state subject to the constraints on airspeed, acceleration, and pitch and bank angles of the aircraft. A discrete-time mathematical model requiring five state variables and three control variables is obtained, assuming steady wind and zero sideslip. The guidance problem is posed as a discrete nonlinear optimal control problem with a cost functional of Bolza form. A solution technique for the control problem is investigated, and numerical examples are presented. It is believed that this approach should prove to be useful in automated air traffic control schemes near large terminal areas.

  9. Optimal pacing for running 400- and 800-m track races

    NASA Astrophysics Data System (ADS)

    Reardon, James

    2013-06-01

    We present a toy model of anaerobic glycolysis that utilizes appropriate physiological and mathematical consideration while remaining useful to the athlete. The toy model produces an optimal pacing strategy for 400-m and 800-m races that is analytically calculated via the Euler-Lagrange equation. The calculation of the optimum v(t) is presented in detail, with an emphasis on intuitive arguments in order to serve as a bridge between the basic techniques presented in undergraduate physics textbooks and the more advanced techniques of control theory. Observed pacing strategies in 400-m and 800-m world-record races are found to be well-fit by the toy model, which allows us to draw a new physiological interpretation for the advantages of common weight-training practices.

  10. Decision Support Requirements in a Unified Life Cycle Engineering (ULCE) Environment. Volume 2. Conceptual Approaches to Optimization.

    DTIC Science & Technology

    1988-05-01

    … in turn, is controlled by the units above it. Dynamic programming is a mathematical technique well suited for optimization of multistage models. … interval to a desired accuracy. Several region elimination methods have been discussed in the literature, including the Golden Section and Fibonacci methods.

  11. A Problem on Optimal Transportation

    ERIC Educational Resources Information Center

    Cechlarova, Katarina

    2005-01-01

    Mathematical optimization problems are not typical in the classical curriculum of mathematics. In this paper we show how several generalizations of an easy problem on optimal transportation were solved by gifted secondary school pupils in a correspondence mathematical seminar, how they can be used in university courses of linear programming and…

  12. Mathematical improvement of the Hopfield model for feasible solutions to the traveling salesman problem by a synapse dynamical system.

    PubMed

    Takahashi, Y

    1998-01-01

    It is well known that the Hopfield Model (HM) for neural networks to solve the Traveling Salesman Problem (TSP) suffers from three major drawbacks: (a) it can converge on nonoptimal locally minimum solutions; (b) it can converge on infeasible solutions; and (c) results are very sensitive to the careful tuning of its parameters. A number of methods have been proposed to overcome (a) well. In contrast, work on (b) and (c) has not been sufficient; techniques have not been generalized to more general optimization problems. Thus this paper mathematically resolves (b) and (c) to such an extent that the resolution can be applied to solving some general network continuous optimization problems, including the Hopfield version of the TSP. It first constructs an Extended HM (E-HM) that overcomes both (b) and (c). The fundamental technique of the E-HM lies in the addition of a synapse dynamical system that cooperates with the current HM unit dynamical system. It is this synapse dynamical system that makes the TSP constraint hold at any final state for whatever choices of the HM parameters and the initial state. The paper then generalizes the E-HM further to a network that can solve a class of continuous optimization problems with a constraint equation where both the objective function and the constraint function are nonnegative and continuously differentiable.

  13. Power-limited low-thrust trajectory optimization with operation point detection

    NASA Astrophysics Data System (ADS)

    Chi, Zhemin; Li, Haiyang; Jiang, Fanghua; Li, Junfeng

    2018-06-01

    The power-limited solar electric propulsion system is considered more practical in mission design. An accurate mathematical model of the propulsion system, based on experimental data of the power generation system, is used in this paper. An indirect method is used to deal with the time-optimal and fuel-optimal control problems, in which the solar electric propulsion system is described using a finite number of operation points, which are characterized by different pairs of thruster input power. In order to guarantee the integral accuracy for the discrete power-limited problem, a power operation detection technique is embedded in the fourth-order Runge-Kutta algorithm with fixed step. Moreover, the logarithmic homotopy method and normalization technique are employed to overcome the difficulties caused by using indirect methods. Three numerical simulations with actual propulsion systems are given to substantiate the feasibility and efficiency of the proposed method.

  14. A New Stochastic Technique for Painlevé Equation-I Using Neural Network Optimized with Swarm Intelligence

    PubMed Central

    Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor

    2012-01-01

    A methodology for the solution of Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using a particle swarm optimization algorithm, employed as a viable global search method, hybridized with an active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results obtained are made with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
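
    The global-search stage described above can be sketched with a bare-bones particle swarm optimizer; the Rosenbrock test function stands in for the networks' unsupervised error, and the active-set local refinement is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):                              # Rosenbrock function as a stand-in error
    return (1 - x[..., 0])**2 + 100 * (x[..., 1] - x[..., 0]**2)**2

n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5   # swarm size, iterations, inertia, pulls
pos = rng.uniform(-2, 2, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best point:", np.round(gbest, 4), "error:", round(float(objective(gbest)), 6))
```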

  15. Inverse treatment planning for spinal robotic radiosurgery: an international multi-institutional benchmark trial.

    PubMed

    Blanck, Oliver; Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank-Andre; Chan, Mark K H; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G; Schweikard, Achim

    2016-05-08

    Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high-dose radiation to well-defined targets while minimizing normal structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms is routine, many aspects of the planning process remain user-dependent. We performed an international, multi-institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. 10 SRS treatment plans were generated for a complex-shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for spinal cord (V14Gy < 2 cc, V18Gy < 0.1 cc) and target (coverage > 95%). The resulting plans were rated on a scale from 1 to 4 (excellent-poor) in five categories (constraint compliance, optimization goals, low-dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were mathematically rated based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (1.2-4.0) and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases, pointing toward the possibility of sole mathematical plan quality comparison. The final rankings revealed that a plan with a well-balanced trade-off among all planning objectives was preferred for treatment by most participants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi-institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery. The agreement among participants, reviewers, and the mathematical ranking on preferable treatment plans and ITP techniques indicates that consensus on treatment planning and plan quality can be reached for spinal robotic radiosurgery.

  16. Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.

    PubMed

    Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T

    2015-03-01

    It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review aims to discuss the relative usefulness of sparse vs rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.

  17. Existence and characterization of optimal control in mathematics model of diabetics population

    NASA Astrophysics Data System (ADS)

    Permatasari, A. H.; Tjahjana, R. H.; Udjiani, T.

    2018-03-01

    Diabetes is a chronic disease with a huge burden affecting individuals and the whole society. In this paper, we constructed an optimal control mathematical model by applying a strategy to control the development of the diabetic population. The constructed mathematical model considers the dynamics of people disabled due to diabetes. Moreover, an optimal control approach is proposed in order to reduce the burden of pre-diabetes. Control is implemented by preventing pre-diabetics from developing into diabetics with and without complications. The existence and characterization of the optimal control are discussed in this paper. The optimal control is characterized by applying the Pontryagin minimum principle. The results indicate that an optimal control exists for the optimization problem in the mathematical model of the diabetic population. The effect of the optimal control variable (prevention) is strongly affected by the number of healthy people.

  18. Valuing hydrological alteration in multi-objective water resources management

    NASA Astrophysics Data System (ADS)

    Bizzi, Simone; Pianosi, Francesca; Soncini-Sessa, Rodolfo

    2012-11-01

    The management of water through the impoundment of rivers by dams and reservoirs is necessary to support key human activities such as hydropower production, agriculture and flood risk mitigation. Advances in multi-objective optimization techniques and ever growing computing power make it possible to design reservoir operating policies that represent Pareto-optimal tradeoffs between multiple interests. On the one hand, such optimization methods can enhance performances of commonly targeted objectives (such as hydropower production or water supply); on the other hand, they risk strongly penalizing all the interests not directly (i.e. mathematically) included in the optimization algorithm. The alteration of the downstream hydrological regime is a well established cause of ecological degradation and its evaluation and rehabilitation is commonly required by recent legislation (such as the Water Framework Directive in Europe). However, it is rarely embedded in reservoir optimization routines and, even when explicitly considered, the criteria adopted for its evaluation are doubted and not commonly trusted, undermining the possibility of real implementation of environmentally friendly policies. The main challenges in defining and assessing hydrological alterations are: how to define a reference state (referencing); how to define criteria upon which to build mathematical indicators of alteration (measuring); and finally how to aggregate the indicators in a single evaluation index (valuing) that can serve as objective function in the optimization problem. This paper aims to address these issues by: (i) discussing the benefits and constraints of different approaches to referencing, measuring and valuing hydrological alteration; (ii) testing two alternative indices of hydrological alteration, one based on the established framework of Indicators of Hydrological Alteration (Richter et al., 1996), and one satisfying the mathematical properties required by widely used optimization methods based on dynamic programming; (iii) demonstrating and discussing these indices by application to the River Ticino, in Italy; (iv) providing a framework to effectively include hydrological alteration within reservoir operation optimization.

  19. Optimization techniques for integrating spatial data

    USGS Publications Warehouse

    Herzfeld, U.C.; Merriam, D.F.

    1995-01-01

    Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automatization, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
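
    The first approach can be sketched as follows: weights of an algebraic combination of standardized spatial grids are tuned with the Nelder-Mead simplex so that the combined map best matches a target variable. The grids, the target, and the starting weights below are synthetic stand-ins for the geological data and the geologic first guess.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
maps = rng.normal(size=(3, 50, 50))       # stand-ins for structure, isopach, petrophysical grids
target = 0.5 * maps[0] + 0.3 * maps[2] + rng.normal(scale=0.1, size=(50, 50))

def misfit(w):
    combined = np.tensordot(np.abs(w), maps, axes=1)   # |w| keeps the weights non-negative
    return np.mean((combined - target) ** 2)

res = minimize(misfit, x0=[0.3, 0.3, 0.3], method="Nelder-Mead")   # "geologic" first guess
print("optimal weights:", np.round(np.abs(res.x), 3))
```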

  20. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm

    PubMed Central

    Tamjidy, Mehran; Baharudin, B. T. Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-01-01

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected through using two different decision making techniques, technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon’s entropy. PMID:28772893

  1. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm.

    PubMed

    Tamjidy, Mehran; Baharudin, B T Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-05-15

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected through using two different decision making techniques, technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon's entropy.
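
    The TOPSIS step mentioned above can be illustrated with a small decision matrix; the candidate solutions, criterion values, weights, and the treatment of HAZ hardness as a cost criterion are illustrative assumptions only.

```python
import numpy as np

#             UTS (MPa)  elongation (%)  HAZ hardness (HV)
M = np.array([[205.0, 7.5, 105.0],
              [215.0, 6.8, 112.0],
              [198.0, 8.9, 101.0],
              [210.0, 8.1, 108.0]])
weights = np.array([0.4, 0.3, 0.3])
benefit = np.array([True, True, False])          # hardness treated here as a cost criterion

R = M / np.linalg.norm(M, axis=0)                # vector-normalized decision matrix
V = R * weights
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus  = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)         # relative closeness to the ideal solution
print("TOPSIS ranking (best first):", np.argsort(-closeness))
```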

  2. A simple technique to increase profits in wood products marketing

    Treesearch

    George B. Harpole

    1971-01-01

    Mathematical models can be used to solve quickly some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems by using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...

  3. Description of Student’s Metacognitive Ability in Understanding and Solving Mathematics Problem

    NASA Astrophysics Data System (ADS)

    Ahmad, Herlina; Febryanti, Fatimah; Febryanti, Fatimah; Muthmainnah

    2018-01-01

    This qualitative research aimed to describe students' metacognitive ability in understanding and solving mathematics problems. The subjects were first-year students in the computer and networking department of SMK Mega Link Majene. The sample was selected using a purposive sampling technique. Data on student achievement were collected using a student achievement test and an interview guide, and observation was used to confirm that the teacher employed a teaching model that develops metacognition. Data analysis consisted of data reduction, presentation, and drawing conclusions. Based on the overall findings of this study, students' metacognitive ability generally does not develop optimally, owing to the limited scope of the materials and to a cognitive teaching strategy handled through verbal presentation and continuous training in facing cognitive tasks, such as understanding and solving problems.

  4. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include aerodynamic coefficient, inertia, center-of-gravity, atmospheric wind, etc. deviations from referenced values. Propulsion parameter state elements have been included not as optional parameters like those just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  5. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the large number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazard early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.

  6. TREATMENT OF LANDFILL LEACHATE BY COUPLING COAGULATION-FLOCCULATION OR OZONATION TO GRANULAR ACTIVATED CARBON ADSORPTION.

    PubMed

    Oloibiri, Violet; Ufomba, Innocent; Chys, Michael; Audenaert, Wim; Demeestere, Kristof; Van Hulle, Stijn W H

    2015-01-01

    A major concern for landfilling facilities is the treatment of their leachate. To optimize organic matter removal from this leachate, the combination of two or more techniques is preferred in order to meet stringent effluent standards. In our study, coagulation-flocculation and ozonation are compared as pre-treatment steps for stabilized landfill leachate prior to granular activated carbon (GAC) adsorption. The efficiency of the pre-treatment techniques is evaluated using COD and UVA254 measurements. For coagulation-flocculation, different chemicals are compared and optimal dosages are determined. After this, iron (III) chloride is selected for subsequent adsorption studies due to its high percentage of COD and UVA254 removal and good sludge settleability. Our findings show that ozonation as a single treatment is more effective in reducing COD in landfill leachate (66%) than coagulation-flocculation (33%). Meanwhile, coagulation performs better in UVA254 reduction than ozonation. Subsequent GAC adsorption of ozonated effluent, coagulated effluent and untreated leachate resulted in 77%, 53% and 8% total COD removal, respectively (after 6 bed volumes). The effect of the pre-treatment techniques on GAC adsorption properties is evaluated experimentally and mathematically using the Thomas and Yoon-Nelson models. Mathematical modelling of the experimental GAC adsorption data shows that ozonation increases the adsorption capacity and breakthrough time by a factor of 2.5 compared to coagulation-flocculation.
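
    Fitting the Thomas breakthrough model mentioned above can be sketched as a simple nonlinear regression; the breakthrough data, flow rate, influent COD, and carbon mass below are invented, and the model form used is the common two-parameter Thomas expression.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 14.0, 18.0])            # h
ct_c0 = np.array([0.02, 0.05, 0.15, 0.33, 0.55, 0.72, 0.90, 0.97])    # effluent/influent ratio

Q, C0, m = 0.5, 800.0, 10.0          # L/h flow, mg/L influent COD, g of GAC (all assumed)

def thomas(t, k_th, q0):
    # Thomas model: C/C0 = 1 / (1 + exp(k_th * (q0*m/Q - C0*t)))
    return 1.0 / (1.0 + np.exp(k_th * (q0 * m / Q - C0 * t)))

(k_th, q0), _ = curve_fit(thomas, t, ct_c0, p0=[1e-4, 300.0], maxfev=10000)
print(f"k_Th = {k_th:.2e} L/(mg*h), q0 = {q0:.1f} mg/g")
```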

  7. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    PubMed

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data, however, much of the data available or even acquirable are not quantitative. Data that is not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended to both inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Review on applications of artificial intelligence methods for dam and reservoir-hydro-environment models.

    PubMed

    Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Abdullah, Sharifah Mastura Syed; El-Shafie, Ahmed

    2018-05-01

    Efficacious operation of dam and reservoir systems could not only guarantee a policy of defence against natural hazards but also identify rules to meet water demand. Successful operation of dam and reservoir systems to ensure optimal use of water resources could be unattainable without accurate and reliable simulation models. Given the highly stochastic nature of hydrologic parameters, developing accurate predictive models that efficiently mimic such complex patterns is a growing domain of research. During the last two decades, artificial intelligence (AI) techniques have been significantly utilized for attaining robust modeling to handle different stochastic hydrological parameters. AI techniques have also shown considerable progress in finding optimal rules for reservoir operation. This review explores the history of developing AI for reservoir inflow forecasting and prediction of evaporation from a reservoir, as the major components of reservoir simulation. In addition, a critical assessment of the advantages and disadvantages of integrating AI simulation methods with optimization methods has been reported. Future research on the potential of utilizing new innovative methods based on AI techniques for reservoir simulation and optimization models is also discussed. Finally, a proposal for a new mathematical procedure to accomplish the realistic evaluation of whole optimization model performance (reliability, resilience, and vulnerability indices) has been recommended.

  9. Optimization strategies based on sequential quadratic programming applied for a fermentation process for butanol production.

    PubMed

    Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens

    2009-11-01

    In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, despite the very similar solutions obtained with both strategies, the problems found with the deterministic-model strategy, such as lack of convergence and high computational time, make the statistical-model strategy, which proved to be robust and fast, more suitable for the flash fermentation process, and it is recommended for real-time applications coupling optimization and control.
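
    A minimal sketch of the second strategy is shown below: a quadratic statistical surrogate for butanol productivity is maximized under a substrate-conversion constraint with an SQP solver (scipy's SLSQP). The surrogate coefficients, bounds, and conversion target are illustrative, not the paper's values.

```python
import numpy as np
from scipy.optimize import minimize

def productivity(x):          # quadratic surrogate in coded variables (dilution rate, flash recycle)
    d, r = x
    return 3.0 + 1.2 * d + 0.8 * r - 0.9 * d**2 - 0.5 * r**2 - 0.3 * d * r

def conversion(x):            # surrogate for substrate conversion (must stay above the target)
    d, r = x
    return 0.97 - 0.06 * d + 0.02 * r - 0.02 * d**2

res = minimize(lambda x: -productivity(x), x0=[0.0, 0.0], method="SLSQP",
               bounds=[(-1, 1), (-1, 1)],
               constraints=[{"type": "ineq", "fun": lambda x: conversion(x) - 0.95}])
print("optimal coded settings:", np.round(res.x, 3), "productivity:", round(-res.fun, 3))
```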

  10. Modal analysis and control of flexible manipulator arms. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Neto, O. M.

    1974-01-01

    The possibility of modeling and controlling flexible manipulator arms was examined. A modal approach was used for obtaining the mathematical model and control techniques. The arm model was represented mathematically by a state space description defined in terms of joint angles and mode amplitudes obtained from truncation on the distributed systems, and included the motion of a two link two joint arm. Three basic techniques were used for controlling the system: pole allocation with gains obtained from the rigid system with interjoint feedbacks, Simon-Mitter algorithm for pole allocation, and sensitivity analysis with respect to parameter variations. An improvement in arm bandwidth was obtained. Optimization of some geometric parameters was undertaken to maximize bandwidth for various payload sizes and programmed tasks. The controlled system is examined under constant gains and using the nonlinear model for simulations following a time varying state trajectory.

  11. Optimal mistuning for enhanced aeroelastic stability of transonic fans

    NASA Technical Reports Server (NTRS)

    Hall, K. C.; Crawley, E. F.

    1983-01-01

    An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each of the blades was modelled with a single torsional degree of freedom.
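
    The augmented Lagrangian approach mentioned above can be sketched on a toy version of the problem: minimize the total (squared) mistuning subject to a minimum "stability margin". The margin function, the penalty parameter, and the omission of the blade-mass positivity constraint are all simplifying assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def margin(m):                      # made-up stability margin, not an aeroelastic model
    return 0.02 + 0.5 * abs(m[0] - m[1]) - 0.1 * (m[0] + m[1])

def f(m):                           # objective: minimal amount of mistuning
    return float(np.sum(np.asarray(m) ** 2))

s_min, lam, rho = 0.05, 0.0, 10.0   # required margin, multiplier, penalty parameter
x = np.array([0.0, 0.0])
for _ in range(20):
    def aug_lag(m, lam=lam, rho=rho):
        viol = s_min - margin(m)    # <= 0 once the margin requirement is met
        return f(m) + (max(0.0, lam + rho * viol) ** 2 - lam ** 2) / (2.0 * rho)
    x = minimize(aug_lag, x, method="Nelder-Mead").x   # inner unconstrained minimization
    lam = max(0.0, lam + rho * (s_min - margin(x)))    # multiplier update
    # note: positivity of the added masses is not enforced in this toy sketch

print("mistuning values:", np.round(x, 4), "margin:", round(margin(x), 4))
```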

  12. Superstructure-based Design and Optimization of Batch Biodiesel Production Using Heterogeneous Catalysts

    NASA Astrophysics Data System (ADS)

    Nuh, M. Z.; Nasir, N. F.

    2017-08-01

    Biodiesel is a fuel comprised of mono-alkyl esters of long-chain fatty acids derived from renewable lipid feedstocks, such as vegetable oils and animal fats. Biodiesel production is a complex process which needs systematic design and optimization. However, no case study has applied the process system engineering (PSE) element of superstructure optimization to the batch process, which involves complex problems and uses mixed-integer nonlinear programming (MINLP). PSE offers a solution to complex engineering systems by enabling the use of viable tools and techniques to better manage and comprehend the complexity of the system. This study first aims to apply PSE tools to the simulation and optimization of the biodiesel process and to develop mathematical models for the plant components for cases A, B and C using published kinetic data. Secondly, it aims to determine the economic analysis for biodiesel production, focusing on heterogeneous catalysts. Finally, it aims to develop the superstructure for biodiesel production using a heterogeneous catalyst. The mathematical models are developed from the superstructure, and the resulting mixed-integer nonlinear model is solved and the economic analysis estimated using MATLAB software. The result of the optimization process, with the objective function of minimizing the annual production cost of the batch process, is 23.2587 million USD for case C. Overall, the application of process system engineering (PSE) has optimized the modelling, design and cost estimation, and the resulting optimization addresses the complexity of batch biodiesel production and processing.

  13. Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.

    NASA Astrophysics Data System (ADS)

    Velichkin, Vladimir A.; Zavyalov, Vladimir A.

    2018-03-01

    This article presents the results of thermal object functioning control analysis (heat exchanger, dryer, heat treatment chamber, etc.). The results were used to determine a mathematical model of the generalized thermal control object. The appropriate optimality criterion was chosen to make the control more energy-efficient. The mathematical programming task was formulated based on the chosen optimality criterion, control object mathematical model and technological constraints. The “maximum energy efficiency” criterion helped avoid solving a system of nonlinear differential equations and solve the formulated problem of mathematical programming in an analytical way. It should be noted that in the case under review the search for optimal control and optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.

  14. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.

  15. Real Time Optimal Control of Supercapacitor Operation for Frequency Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yusheng; Panwar, Mayank; Mohanpurkar, Manish

    2016-07-01

    Supercapacitors are gaining wider application in power systems due to their fast dynamic response. Utilizing supercapacitors through power electronics interfaces for power compensation is a proven, effective technique for applications such as frequency restoration, provided that the cost of supercapacitor maintenance and the energy loss in the power electronics interfaces are addressed. Traditional optimization-based control methods are infeasible for mitigating the impacts of frequent cycling. This paper proposes a Front End Controller (FEC) using Generalized Predictive Control featuring real-time receding-horizon optimization. The optimization constraints are based on cost and thermal management to enhance the utilization efficiency of the supercapacitors. A rigorous mathematical derivation is conducted, and test results acquired from a Digital Real Time Simulator are provided to demonstrate effectiveness.

  16. Design and statistical optimization of glipizide loaded lipospheres using response surface methodology.

    PubMed

    Shivakumar, Hagalavadi Nanjappa; Patel, Pragnesh Bharat; Desai, Bapusaheb Gangadhar; Ashok, Purnima; Arulmozhi, Sinnathambi

    2007-09-01

    A 3² factorial design was employed to produce glipizide lipospheres by the emulsification phase separation technique using paraffin wax and stearic acid as retardants. The effects of critical formulation variables, namely the level of paraffin wax (X1) and the proportion of stearic acid in the wax (X2), on geometric mean diameter (dg), percent encapsulation efficiency (% EE), release at the end of 12 h (rel12) and time taken for 50% of drug release (t50) were evaluated using the F-test. Mathematical models containing only the significant terms were generated for each response parameter using multiple linear regression analysis (MLRA) and analysis of variance (ANOVA). Both formulation variables studied exerted a significant influence (p < 0.05) on the response parameters. Numerical optimization using the desirability approach was employed to develop an optimized formulation by setting constraints on the dependent and independent variables. The experimental values of dg, % EE, rel12 and t50 for the optimized formulation were found to be 57.54 ± 1.38 μm, 86.28 ± 1.32%, 77.23 ± 2.78% and 5.60 ± 0.32 h, respectively, which were in close agreement with those predicted by the mathematical models. The drug release from lipospheres followed first-order kinetics and was characterized by the Higuchi diffusion model. The optimized liposphere formulation developed was found to produce sustained anti-diabetic activity following oral administration in rats.

  17. Evaluating optimal therapy robustness by virtual expansion of a sample population, with a case study in cancer immunotherapy

    PubMed Central

    Barish, Syndi; Ochs, Michael F.; Sontag, Eduardo D.; Gevertz, Jana L.

    2017-01-01

    Cancer is a highly heterogeneous disease, exhibiting spatial and temporal variations that pose challenges for designing robust therapies. Here, we propose the VEPART (Virtual Expansion of Populations for Analyzing Robustness of Therapies) technique as a platform that integrates experimental data, mathematical modeling, and statistical analyses for identifying robust optimal treatment protocols. VEPART begins with time course experimental data for a sample population, and a mathematical model fit to aggregate data from that sample population. Using nonparametric statistics, the sample population is amplified and used to create a large number of virtual populations. At the final step of VEPART, robustness is assessed by identifying and analyzing the optimal therapy (perhaps restricted to a set of clinically realizable protocols) across each virtual population. As proof of concept, we have applied the VEPART method to study the robustness of treatment response in a mouse model of melanoma subject to treatment with immunostimulatory oncolytic viruses and dendritic cell vaccines. Our analysis (i) showed that every scheduling variant of the experimentally used treatment protocol is fragile (nonrobust) and (ii) discovered an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists. PMID:28716945
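
    The virtual-expansion step described above is, at its core, a nonparametric bootstrap over subjects. The short Python sketch below illustrates that resampling idea; the array shape, the function name and the synthetic data are illustrative assumptions, not the VEPART implementation itself.

      import numpy as np

      def virtual_populations(sample, n_pops=500, rng=None):
          """Bootstrap-resample subjects to create virtual populations.

          sample : array of shape (n_subjects, n_timepoints), one row of
                   longitudinal measurements per subject.
          Returns an array of shape (n_pops, n_subjects, n_timepoints).
          """
          rng = np.random.default_rng(rng)
          n_subjects = sample.shape[0]
          pops = [sample[rng.integers(0, n_subjects, size=n_subjects)]
                  for _ in range(n_pops)]
          return np.asarray(pops)

      # Hypothetical example: 8 mice measured at 5 time points
      data = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.3, size=(8, 5))
      print(virtual_populations(data, n_pops=200).shape)   # (200, 8, 5)

    Each virtual population would then be re-fit with the mathematical model and its optimal protocol compared across populations to judge robustness.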

  18. Goldmann Tonometer Prism with an Optimized Error Correcting Applanation Surface.

    PubMed

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko; Schwiegerling, Jim

    2016-09-01

    We evaluate solutions for an applanating surface modification to the Goldmann tonometer prism, which substantially negates the errors due to patient variability in biomechanics. A modified Goldmann or correcting applanation tonometry surface (CATS) prism is presented which was optimized to minimize the intraocular pressure (IOP) error due to corneal thickness, stiffness, curvature, and tear film. Mathematical modeling with finite element analysis (FEA) and manometric IOP referenced cadaver eyes were used to optimize and validate the design. Mathematical modeling of the optimized CATS prism indicates an approximate 50% reduction in each of the corneal biomechanical and tear film errors. Manometric IOP referenced pressure in cadaveric eyes demonstrates substantial equivalence to GAT in nominal eyes with the CATS prism as predicted by modeling theory. A CATS modified Goldmann prism is theoretically able to significantly improve the accuracy of IOP measurement without changing Goldmann measurement technique or interpretation. Clinical validation is needed but the analysis indicates a reduction in CCT error alone to less than ±2 mm Hg using the CATS prism in 100% of a standard population compared to only 54% less than ±2 mm Hg error with the present Goldmann prism. This article presents an easily adopted novel approach and critical design parameters to improve the accuracy of a Goldmann applanating tonometer.

  19. Dwell time algorithm based on the optimization theory for magnetorheological finishing

    NASA Astrophysics Data System (ADS)

    Zhang, Yunfei; Wang, Yang; Wang, Yajun; He, Jianguo; Ji, Fang; Huang, Wen

    2010-10-01

    Magnetorheological finishing (MRF) is an advanced polishing technique capable of rapidly converging to the required surface figure. This process can deterministically control the amount of material removed by varying the time spent dwelling at each particular position on the workpiece surface. The dwell time algorithm is one of the key techniques of MRF. A dwell time algorithm based on a matrix equation and optimization theory is presented in this paper. The conventional mathematical model of the dwell time was transformed into a matrix equation containing the initial surface error, the removal function and the dwell time function. The dwell time to be calculated is simply the solution to this large, sparse matrix equation. A new mathematical model of the dwell time based on optimization theory was established, which aims to minimize the 2-norm or ∞-norm of the residual surface error. The solution meets almost all the requirements of precise computer numerical control (CNC) without any need for extra data processing, because this optimization model takes some polishing conditions as constraints. Practical approaches to finding a minimal least-squares solution and a minimal maximum solution are also discussed in this paper. Simulations have shown that the proposed algorithm is numerically robust and reliable. With this algorithm, an experiment was performed on the MRF machine developed by the authors. After 4.7 minutes of polishing, the figure error of a flat workpiece with a 50 mm diameter was improved from 0.191λ (λ = 632.8 nm) to 0.087λ PV and from 0.041λ to 0.010λ RMS. This algorithm can be applied to polish workpieces of all shapes, including flats, spheres, aspheres and prisms, and is capable of improving the polishing figures dramatically.
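
    The least-squares formulation described above amounts to a constrained linear problem R t ≈ e with nonnegative dwell times t. A minimal Python sketch using SciPy's nonnegative least squares is given below; the matrix sizes and random data are placeholders, not the paper's removal function or measured surface error, and the additional CNC constraints used by the authors are omitted.

      import numpy as np
      from scipy.optimize import nnls

      # Illustrative sizes: m surface-error samples, n dwell positions (assumed).
      m, n = 200, 80
      rng = np.random.default_rng(1)
      R = np.abs(rng.normal(size=(m, n)))      # removal-function (influence) matrix
      e0 = np.abs(rng.normal(size=m))          # initial surface error

      # Minimal least-squares dwell time with the physical constraint t >= 0
      t, residual_norm = nnls(R, e0)

      residual = e0 - R @ t                    # predicted residual surface error
      print(f"RMS residual: {np.sqrt(np.mean(residual**2)):.4f}, "
            f"2-norm residual: {residual_norm:.4f}")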

  20. Modelling and Optimizing Mathematics Learning in Children

    ERIC Educational Resources Information Center

    Käser, Tanja; Busetto, Alberto Giovanni; Solenthaler, Barbara; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; von Aster, Michael; Gross, Markus

    2013-01-01

    This study introduces a student model and control algorithm, optimizing mathematics learning in children. The adaptive system is integrated into a computer-based training system for enhancing numerical cognition aimed at children with developmental dyscalculia or difficulties in learning mathematics. The student model consists of a dynamic…

  1. Optimum structural design with plate bending elements - A survey

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Prasad, B.

    1981-01-01

    A survey is presented of recently published papers in the field of optimum structural design of plates, largely with respect to the minimum-weight design of plates subject to such constraints as fundamental frequency maximization. It is shown that, due to the availability of powerful computers, the trend in optimum plate design is away from methods tailored to specific geometry and loads and toward methods that can be easily programmed for any kind of plate, such as finite element methods. A corresponding shift is seen in optimization from variational techniques to numerical optimization algorithms. Among the topics covered are fully stressed design and optimality criteria, mathematical programming, smooth and ribbed designs, design against plastic collapse, buckling constraints, and vibration constraints.

  2. Optimal parameter estimation with a fixed rate of abstention

    NASA Astrophysics Data System (ADS)

    Gendra, B.; Ronco-Bonvehi, E.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.

    2013-07-01

    The problems of optimally estimating a phase, a direction, and the orientation of a Cartesian frame (or trihedron) with general pure states are addressed. Special emphasis is put on estimation schemes that allow for inconclusive answers or abstention. It is shown that such schemes enable drastic improvements, up to the extent of attaining the Heisenberg limit in some cases, and the required amount of abstention is quantified. A general mathematical framework to deal with the asymptotic limit of many qubits or large angular momentum is introduced and used to obtain analytical results for all the relevant cases under consideration. Parameter estimation with abstention is also formulated as a semidefinite programming problem, for which very efficient numerical optimization techniques exist.

  3. Mathematical and Numerical Techniques in Energy and Environmental Modeling

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Ewing, R. E.

    Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms

  4. The influence of optimism and pessimism on student achievement in mathematics

    NASA Astrophysics Data System (ADS)

    Yates, Shirley M.

    2002-11-01

    Students' causal attributions are not only fundamental motivational variables but are also critical motivators of their persistence in learning. Optimism, pessimism, and achievement in mathematics were measured in a sample of primary and lower secondary students on two occasions. Although achievement in mathematics was most strongly related to prior achievement and grade level, optimism and pessimism were significant factors. In particular, students with a more generally pessimistic outlook on life had a lower level of achievement in mathematics over time. Gender was not a significant factor in achievement. The implications of these findings are discussed.

  5. Inverse treatment planning for spinal robotic radiosurgery: an international multi‐institutional benchmark trial

    PubMed Central

    Wang, Lei; Baus, Wolfgang; Grimm, Jimm; Lacornerie, Thomas; Nilsson, Joakim; Luchkovskyi, Sergii; Cano, Isabel Palazon; Shou, Zhenyu; Ayadi, Myriam; Treuer, Harald; Viard, Romain; Siebert, Frank‐Andre; Chan, Mark K.H.; Hildebrandt, Guido; Dunst, Jürgen; Imhoff, Detlef; Wurster, Stefan; Wolff, Robert; Romanelli, Pantaleo; Lartigau, Eric; Semrau, Robert; Soltys, Scott G.; Schweikard, Achim

    2016-01-01

    Stereotactic radiosurgery (SRS) is the accurate, conformal delivery of high‐dose radiation to well‐defined targets while minimizing normal structure doses via steep dose gradients. While inverse treatment planning (ITP) with computerized optimization algorithms is routine, many aspects of the planning process remain user‐dependent. We performed an international, multi‐institutional benchmark trial to study planning variability and to analyze preferable ITP practice for spinal robotic radiosurgery. Ten SRS treatment plans were generated for a complex‐shaped spinal metastasis with 21 Gy in 3 fractions and tight constraints for spinal cord (V14Gy<2 cc, V18Gy<0.1 cc) and target (coverage >95%). The resulting plans were rated on a scale from 1 to 4 (excellent‐poor) in five categories (constraint compliance, optimization goals, low‐dose regions, ITP complexity, and clinical acceptability) by a blinded review panel. Additionally, the plans were mathematically rated based on plan indices (critical structure and target doses, conformity, monitor units, normal tissue complication probability, and treatment time) and compared to the human rankings. The treatment plans and the reviewers' rankings varied substantially among the participating centers. The average mean overall rank was 2.4 (1.2‐4.0) and 8/10 plans were rated excellent in at least one category by at least one reviewer. The mathematical rankings agreed with the mean overall human rankings in 9/10 cases, pointing toward the possibility of sole mathematical plan quality comparison. The final rankings revealed that a plan with a well‐balanced trade‐off among all planning objectives was preferred for treatment by most participants, reviewers, and the mathematical ranking system. Furthermore, this plan was generated with simple planning techniques. Our multi‐institutional planning study found wide variability in ITP approaches for spinal robotic radiosurgery. The match among participants, reviewers, and the mathematical ranking on preferable treatment plans and ITP techniques indicates that agreement on treatment planning and plan quality can be reached for spinal robotic radiosurgery. PACS number(s): 87.55.de PMID:27167291

  6. Representations in Problem Solving: A Case Study with Optimization Problems

    ERIC Educational Resources Information Center

    Villegas, Jose L.; Castro, Enrique; Gutierrez, Jose

    2009-01-01

    Introduction: Representations play an essential role in mathematical thinking. They favor the understanding of mathematical concepts and stimulate the development of flexible and versatile thinking in problem solving. Here our focus is on their use in optimization problems, a type of problem considered important in mathematics teaching and…

  7. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using the high-fidelity, evaluation-based single-fidelity optimization. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
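
    The expected-improvement criterion used above to select additional samples has a standard closed form when the surrogate supplies a Gaussian predictive mean and standard deviation. The Python sketch below computes it for a minimization problem; the function name, the xi margin and the numerical values are illustrative assumptions, not the authors' hybrid kriging/RBF surrogate.

      import numpy as np
      from scipy.stats import norm

      def expected_improvement(mu, sigma, f_best, xi=0.0):
          """Expected improvement (for minimization) at candidate points with
          surrogate mean `mu`, predictive standard deviation `sigma`, and best
          observed objective value `f_best`; `xi` is an optional margin."""
          mu, sigma = np.asarray(mu), np.asarray(sigma)
          improvement = f_best - mu - xi
          with np.errstate(divide="ignore", invalid="ignore"):
              z = improvement / sigma
              ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
          return np.where(sigma > 0, ei, np.maximum(improvement, 0.0))

      # Example: pick the candidate with the largest expected improvement
      mu = np.array([1.2, 0.9, 1.5])
      sigma = np.array([0.3, 0.05, 0.6])
      print(expected_improvement(mu, sigma, f_best=1.0).argmax())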

  8. Generation of structural topologies using efficient technique based on sorted compliances

    NASA Astrophysics Data System (ADS)

    Mazur, Monika; Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2018-01-01

    Topology optimization, although well recognized, is still being widely developed. It has recently gained more attention as large computational capability has become available to designers. This process is stimulated by a variety of emerging, innovative optimization methods. It is observed that traditional gradient-based mathematical programming algorithms are, in many cases, replaced by novel and efficient heuristic methods inspired by biological, chemical or physical phenomena. These methods have become useful tools for structural optimization because of their versatility and easy numerical implementation. In this paper an engineering implementation of a novel heuristic algorithm for minimum compliance topology optimization is discussed. The performance of the topology generator is based on a special function utilizing information on the compliance distribution within the design space. To cope with engineering problems, the algorithm has been combined with the structural analysis system Ansys.

  9. Optimal pricing and marketing planning for deteriorating items.

    PubMed

    Moosavi Tabatabaei, Seyed Reza; Sadjadi, Seyed Jafar; Makui, Ahmad

    2017-01-01

    Optimal pricing and marketing planning plays an essential role in production decisions on deteriorating items. This paper presents a mathematical model for a three-level supply chain, which includes one producer, one distributor and one retailer. The proposed study considers the production of a deteriorating item where demand is influenced by price, marketing expenditure, quality of product and after-sales service expenditures. The proposed model is formulated as a geometric program with five degrees of difficulty, and the problem is solved using recent advances in optimization techniques. The study is supported by several numerical examples, and sensitivity analysis is performed to analyze the effects of changes in different parameters on the optimal solution. The preliminary results indicate that changes in the parameters influencing demand, as well as in the inventory holding, deterioration and set-up costs, significantly affect total revenue.
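
    For readers unfamiliar with geometric programming, the Python sketch below shows how a small posynomial cost with monomial constraints can be solved with CVXPY's geometric-programming mode. The variable names, exponents and coefficients are invented for illustration and do not reproduce the paper's three-level supply chain model.

      import cvxpy as cp

      p = cp.Variable(pos=True)   # price-like decision variable
      m = cp.Variable(pos=True)   # marketing-expenditure-like decision variable
      q = cp.Variable(pos=True)   # quality-effort-like decision variable

      # Posynomial objective: trade-off between a demand-driven term and linear costs
      cost = 10 * p**-1.0 * m**-0.3 * q**-0.5 + 2 * m + 5 * q + 0.1 * p
      constraints = [p * q <= 50]                 # monomial <= constant

      problem = cp.Problem(cp.Minimize(cost), constraints)
      problem.solve(gp=True)                      # solve as a log-log convex (geometric) program
      print(problem.value, p.value, m.value, q.value)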

  10. Determination of diffusion coefficient for released nanoparticles from developed gelatin/chitosan bilayered buccal films.

    PubMed

    Mahdizadeh Barzoki, Zahra; Emam-Djomeh, Zahra; Mortazavian, Elaheh; Rafiee-Tehrani, Niyousha; Behmadi, Homa; Rafiee-Tehrani, Morteza; Moosavi-Movahedi, Ali Akbar

    2018-06-01

    This study addresses the mathematical optimization by a Box-Behnken statistical design, fabrication by the ionic gelation technique, and in vitro characterization of insulin nanoparticles containing a thiolated N-dimethyl ethyl chitosan (DMEC-Cys) conjugate. The optimized insulin nanoparticles were then loaded into the buccal film, in vitro drug release from the films was investigated, and the diffusion coefficient was predicted. The optimized nanoparticles were shown to have a mean particle diameter of 148 nm, a zeta potential of 15.5 mV, a PdI of 0.26 and an AE of 97.56%. Cell viability after incubation with the optimized nanoparticles and films was assessed using an MTT biochemical assay. The in vitro release study, FTIR and cytotoxicity results also indicated that nanoparticles made of this thiolated polymer are suitable candidates for oral insulin delivery.

  11. Towards automating the discovery of certain innovative design principles through a clustering-based optimization technique

    NASA Astrophysics Data System (ADS)

    Bandaru, Sunith; Deb, Kalyanmoy

    2011-09-01

    In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.

  12. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  13. MDO can help resolve the designer's dilemma. [multidisciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Tulinius, Jan R.

    1991-01-01

    Multidisciplinary design optimization (MDO) is presented as a rapidly growing body of methods, algorithms, and techniques that will provide a quantum jump in the effectiveness and efficiency of the quantitative side of design, and will turn that side into an environment in which the qualitative side can thrive. MDO borrows from CAD/CAM for graphic visualization of geometrical and numerical data, from database technology, and from advances in computer software and hardware. Expected benefits from this methodology are a rational, mathematically consistent approach to hypersonic aircraft designs, designs pushed closer to the optimum, and a design process either shortened or leaving time available for different concepts to be explored.

  14. The Mathematics of Navigating the Solar System

    NASA Technical Reports Server (NTRS)

    Hintz, Gerald

    2000-01-01

    In navigating spacecraft throughout the solar system, the space navigator relies on three academic disciplines - optimization, estimation, and control - that work on mathematical models of the real world. Thus, the navigator determines the flight path that will consume propellant and other resources in an efficient manner, determines where the craft is and predicts where it will go, and transfers it onto the optimal trajectory that meets operational and mission constraints. Mission requirements, for example, demand that observational measurements be made with sufficient precision that relativity must be modeled in collecting and fitting (the estimation process) the data, and propagating the trajectory. Thousands of parameters are now determined in near real-time to model the gravitational forces acting on a spacecraft in the vicinity of an irregularly shaped body. Completing these tasks requires mathematical models, analyses, and processing techniques. Newton, Gauss, Lambert, Legendre, and others are justly famous for their contributions to the mathematics of these tasks. More recently, graduate students participated in research to update the gravity model of the Saturnian system, including higher order gravity harmonics, tidal effects, and the influence of the rings. This investigation was conducted for the Cassini project to incorporate new trajectory modeling features in the navigation software. The resulting trajectory model will be used in navigating the 4-year tour of the Saturnian satellites. Also, undergraduate students are determining the ephemerides (locations versus time) of asteroids that will be used as reference objects in navigating the New Millennium's Deep Space 1 spacecraft autonomously.

  15. Optimization Parameters of Air-conditioning and Heat Insulation Systems of a Pressurized Cabins of Long-distance Airplanes

    NASA Astrophysics Data System (ADS)

    Gusev, Sergey A.; Nikolaev, Vladimir N.

    2018-01-01

    A method for determining the thermal state of an aircraft compartment, based on a mathematical model of the compartment thermal condition, was developed. Solution techniques for the direct and inverse heat exchange problems and for determining confidence intervals of the parametric identification estimates were also developed. The required performance of the air-conditioning and ventilation systems and the heat insulation depth of the crew and passenger cabins were obtained.

  16. Towards a formal semantics for Ada 9X

    NASA Technical Reports Server (NTRS)

    Guaspari, David; Mchugh, John; Wolfgang, Polak; Saaltink, Mark

    1995-01-01

    The Ada 9X language precision team was formed during the revisions of Ada 83, with the goal of analyzing the proposed design, identifying problems, and suggesting improvements, through the use of mathematical models. This report defines a framework for formally describing Ada 9X, based on Kahn's 'natural semantics', and applies the framework to portions of the language. The proposals for exceptions and optimization freedoms are also analyzed, using a different technique.

  17. Boundary elements; Proceedings of the Fifth International Conference, Hiroshima, Japan, November 8-11, 1983

    NASA Astrophysics Data System (ADS)

    Brebbia, C. A.; Futagami, T.; Tanaka, M.

    The boundary-element method (BEM) in computational fluid and solid mechanics is examined in reviews and reports of theoretical studies and practical applications. Topics presented include the fundamental mathematical principles of BEMs, potential problems, EM-field problems, heat transfer, potential-wave problems, fluid flow, elasticity problems, fracture mechanics, plates and shells, inelastic problems, geomechanics, dynamics, industrial applications of BEMs, optimization methods based on the BEM, numerical techniques, and coupling.

  18. Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization

    NASA Astrophysics Data System (ADS)

    Adhikari, Sam

    2007-11-01

    Imperfectly expanded jets generate screech noise. The imbalance between the back pressure and the exit pressure of an imperfectly expanded jet produces shock cells and expansion or compression waves at the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of the full Navier-Stokes equations in cylindrical coordinates with large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters to the shock cell patterns, screech frequency and distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimizing the quadratic functions over a set of polyhedra provides the optimal result. Various industry-standard methods such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second-order cone programming are used for the quadratic optimization.
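
    The problem class described above (convex quadratic objective over a polyhedron defined by affine constraints) can be illustrated with a generic solver call. The Python sketch below uses CVXPY on random data; none of the matrices correspond to the jet-noise model, and the problem size is purely illustrative.

      import cvxpy as cp
      import numpy as np

      # Illustrative quadratic program: minimize (1/2) x^T P x + q^T x s.t. A x <= b
      n = 4
      rng = np.random.default_rng(2)
      M = rng.normal(size=(n, n))
      P = M.T @ M + np.eye(n)          # symmetric positive definite
      q = rng.normal(size=n)
      A = rng.normal(size=(6, n))      # affine (polyhedral) constraints
      b = np.ones(6)

      x = cp.Variable(n)
      objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
      problem = cp.Problem(objective, [A @ x <= b])
      problem.solve()
      print(problem.status, x.value)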

  19. Optimization of a new mathematical model for bacterial growth

    USDA-ARS?s Scientific Manuscript database

    The objective of this research is to optimize a new mathematical equation as a primary model to describe the growth of bacteria under constant temperature conditions. An optimization algorithm was used in combination with a numerical (Runge-Kutta) method to solve the differential form of the new gr...

  20. Strategies and trajectories of coral reef fish larvae optimizing self-recruitment.

    PubMed

    Irisson, Jean-Olivier; LeVan, Anselme; De Lara, Michel; Planes, Serge

    2004-03-21

    Like many marine organisms, most coral reef fishes have a dispersive larval phase. The fate of this phase is of great concern for their ecology as it may determine population demography and connectivity. As direct study of the larval phase is difficult, we tackle the question of dispersion from an opposite point of view and study self-recruitment. In this paper, we propose a mathematical model of the pelagic phase, parameterized by a limited number of factors (currents, predator and prey distributions, energy budgets) and which focuses on the behavioral response of the larvae to these factors. We evaluate optimal behavioral strategies of the larvae (i.e. strategies that maximize the probability of return to the natal reef) and examine the trajectories of dispersal that they induce. Mathematically, larval behavior is described by a controlled Markov process. A strategy induces a sequence, indexed by time steps, of "decisions" (e.g. looking for food, swimming in a given direction). Biological, physical and topographic constraints are captured through the transition probabilities and the sets of possible decisions. Optimal strategies are found by means of the so-called stochastic dynamic programming equation. A computer program is developed and optimal decisions and trajectories are numerically derived. We conclude that this technique can be considered as a good tool to represent plausible larval behaviors and that it has great potential in terms of theoretical investigations and also for field applications.
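
    The stochastic dynamic programming equation mentioned above is solved by backward induction over the controlled Markov process. The Python sketch below shows that recursion on a toy three-state, two-decision larval model; the states, decisions and transition probabilities are invented for illustration and are not the paper's parameterization.

      import numpy as np

      def backward_induction(P, reef_state, horizon):
          """Finite-horizon stochastic dynamic programming.

          P : array of shape (n_actions, n_states, n_states); P[a, s, t] is the
              probability of moving from state s to state t under decision a.
          Maximizes the probability of being at `reef_state` at the final time.
          Returns the value function V[t, s] and the optimal policy pi[t, s].
          """
          n_actions, n_states, _ = P.shape
          V = np.zeros((horizon + 1, n_states))
          V[horizon, reef_state] = 1.0              # terminal reward: settled on the natal reef
          policy = np.zeros((horizon, n_states), dtype=int)
          for t in range(horizon - 1, -1, -1):
              q = P @ V[t + 1]                      # q[a, s] = E[V_{t+1} | state s, decision a]
              policy[t] = q.argmax(axis=0)
              V[t] = q.max(axis=0)
          return V, policy

      # Toy example: states (offshore, nearshore, reef), decisions (drift, swim)
      P = np.array([
          [[0.7, 0.3, 0.0], [0.2, 0.6, 0.2], [0.0, 0.1, 0.9]],   # drift
          [[0.4, 0.5, 0.1], [0.1, 0.4, 0.5], [0.0, 0.0, 1.0]],   # swim toward reef
      ])
      V, policy = backward_induction(P, reef_state=2, horizon=10)
      print(V[0])   # probability of self-recruitment from each starting state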

  1. A web-oriented software for the optimization of pooled experiments in NGS for detection of rare mutations.

    PubMed

    Evangelista, Daniela; Zuccaro, Antonio; Lančinskas, Algirdas; Žilinskas, Julius; Guarracino, Mario R

    2016-02-17

    The cost per patient of next generation sequencing for the detection of rare mutations may be significantly reduced using pooled experiments. Recently, some techniques have been proposed for the planning of pooled experiments and for the optimal allocation of patients into pools. However, the lack of a user-friendly resource for designing pooled experiments forces scientists to perform frequent, complex and long computations. OPENDoRM is a powerful collection of novel mathematical algorithms usable via an intuitive graphical user interface. It enables researchers to speed up the planning of their routine experiments and supports scientists without specific bioinformatics expertise. Users can automatically carry out analyses of the costs associated with the optimal allocation of patients into pools. They can also choose between three distinct mathematical pooling methods, each of which suggests the optimal configuration for the submitted experiment. Importantly, in order to keep track of the performed experiments, users can save and export the results in standard tabular and chart formats. OPENDoRM is a freely available web-oriented application for the planning of pooled NGS experiments, available at: http://www-labgtp.na.icar.cnr.it/OPENDoRM. Its easy and intuitive graphical user interface enables researchers to plan their experiments using novel algorithms and to interactively visualize the results.

  2. Optimization Technique With Sensitivity Analysis On Menu Scheduling For Boarding School Student Aged 13-18 Using “Sufahani-Ismail Algorithm”

    NASA Astrophysics Data System (ADS)

    Sudin, Azila M.; Sufahani, Suliadi

    2018-04-01

    Boarding school students aged 13-18 need to eat nutritious meals containing proper calories, energy and nutrients for appropriate development, in order to repair and maintain body tissues and to prevent undesired diseases and infection. Serving healthier food is a notable step towards accomplishing that goal. However, planning a nutritious and balanced menu manually is complicated, inefficient and tedious. Therefore, the aim of this study is to develop a mathematical model with an optimization technique for menu scheduling that fulfills the whole nutrient requirement for boarding school students, reduces processing time, minimizes the budget and also serves a variety of food each day. It additionally gives the cook the flexibility to choose any food to be considered at the beginning of the process and to change any preferred menu even after the optimal solution has been obtained; this is called sensitivity analysis. A recalculation procedure is performed based on the optimal solution, and a seven-day menu is produced. The data were gathered from the Malaysian Ministry of Education and school authorities. Menu planning is a well-known optimization problem, so Binary Programming together with an optimization technique and the “Sufahani-Ismail Algorithm” were utilized to solve it. In future, this model can be applied to other menu problems, for example for athletes, chronic disease patients, the military, colleges, hospitals and nursing homes.
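
    To illustrate the binary-programming core of such menu models, the Python sketch below selects dishes to minimize cost subject to nutrient minima using SciPy's mixed-integer solver. The dishes, costs and nutrient data are hypothetical, and the full constraint set of the study (variety, meal structure, sensitivity recalculation) is omitted.

      import numpy as np
      from scipy.optimize import milp, LinearConstraint, Bounds

      # Hypothetical menu-selection data (not the ministry dataset)
      cost = np.array([2.0, 3.5, 1.2, 2.8, 1.9])            # cost per dish
      nutrients = np.array([                                  # rows: calories, protein
          [450, 600, 300, 500, 350],
          [20,  35,  10,  25,  15],
      ])
      requirement = np.array([900.0, 45.0])                   # daily minima

      res = milp(
          c=cost,                                             # minimize total cost
          constraints=LinearConstraint(nutrients, lb=requirement, ub=np.inf),
          integrality=np.ones_like(cost),                     # integer variables...
          bounds=Bounds(0, 1),                                # ...restricted to 0/1 (binary)
      )
      print(res.x, res.fun)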

  3. An opinion formation based binary optimization approach for feature selection

    NASA Astrophysics Data System (ADS)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics human-human interaction mechanisms based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.

  4. Optimal pricing and marketing planning for deteriorating items

    PubMed Central

    Moosavi Tabatabaei, Seyed Reza; Sadjadi, Seyed Jafar; Makui, Ahmad

    2017-01-01

    Optimal pricing and marketing planning plays an essential role in production decisions on deteriorating items. This paper presents a mathematical model for a three-level supply chain, which includes one producer, one distributor and one retailer. The proposed study considers the production of a deteriorating item where demand is influenced by price, marketing expenditure, quality of product and after-sales service expenditures. The proposed model is formulated as a geometric program with five degrees of difficulty, and the problem is solved using recent advances in optimization techniques. The study is supported by several numerical examples, and sensitivity analysis is performed to analyze the effects of changes in different parameters on the optimal solution. The preliminary results indicate that changes in the parameters influencing demand, as well as in the inventory holding, deterioration and set-up costs, significantly affect total revenue. PMID:28306750

  5. Design of an optimal preview controller for linear discrete-time descriptor systems with state delay

    NASA Astrophysics Data System (ADS)

    Cao, Mengjuan; Liao, Fucheng

    2015-04-01

    In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.

  6. Computer-Aided Breast Cancer Diagnosis with Optimal Feature Sets: Reduction Rules and Optimization Techniques.

    PubMed

    Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo

    2017-01-01

    This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). This method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified, and then a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of extremely powerful reduction techniques, which do not lose the optimal solution, are employed; second, a metaheuristic search is used to identify the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.

  7. Merits and limitations of optimality criteria method for structural optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo

    1993-01-01

    The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
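
    The fully stressed design concept referred to above resizes each member by the ratio of its stress to the allowable stress and iterates. The Python sketch below shows that optimality-criteria style update loop; the two-member load-sharing example is a toy stand-in for a structural (e.g. finite element) analysis, and the names are illustrative.

      import numpy as np

      def fully_stressed_resize(analyze, areas0, sigma_allow, n_iter=20, a_min=1e-4):
          """Stress-ratio (fully stressed design) resizing loop.

          analyze(areas) -> member stresses for the current sizing (user-supplied
          structural analysis).  The classic update scales each member area by
          |stress| / allowable stress.
          """
          areas = np.asarray(areas0, dtype=float)
          for _ in range(n_iter):
              sigma = np.abs(analyze(areas))
              areas = np.maximum(areas * sigma / sigma_allow, a_min)
          return areas

      # Toy indeterminate two-member example: the load splits in proportion to
      # member stiffness (area), so stresses depend on the current sizing.
      def analyze(areas):
          load = 100.0
          forces = load * areas / areas.sum()
          return forces / areas        # stress = force / area

      print(fully_stressed_resize(analyze, areas0=[1.0, 2.0], sigma_allow=25.0))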

  8. Multiobjective optimization and multivariable control of the beer fermentation process with the use of evolutionary algorithms.

    PubMed

    Andrés-Toro, B; Girón-Sierra, J M; Fernández-Blanco, P; López-Orozco, J A; Besada-Portas, E

    2004-04-01

    This paper describes empirical research on the model, optimization and supervisory control of beer fermentation. Conditions in the laboratory were made as similar as possible to brewery industry conditions. Since mathematical models that consider realistic industrial conditions were not available, a new mathematical model involving industrial conditions was first developed. Batch fermentations are multiobjective dynamic processes that must be guided along optimal paths to obtain good results. The paper describes a direct way to apply a Pareto set approach with multiobjective evolutionary algorithms (MOEAs). Optimal ways to drive these processes were successfully found. Once obtained, the mathematical fermentation model was used to optimize the fermentation process by means of an intelligent control based on certain rules.

  9. Applications of fuzzy theories to multi-objective system optimization

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Dhingra, A. K.

    1991-01-01

    Most of the computer aided design techniques developed so far deal with the optimization of a single objective function over the feasible design space. However, there often exist several engineering design problems which require a simultaneous consideration of several objective functions. This work presents several techniques of multiobjective optimization. In addition, a new formulation, based on fuzzy theories, is also introduced for the solution of multiobjective system optimization problems. The fuzzy formulation is useful in dealing with systems which are described imprecisely using fuzzy terms such as, 'sufficiently large', 'very strong', or 'satisfactory'. The proposed theory translates the imprecise linguistic statements and multiple objectives into equivalent crisp mathematical statements using fuzzy logic. The effectiveness of all the methodologies and theories presented is illustrated by formulating and solving two different engineering design problems. The first one involves the flight trajectory optimization and the main rotor design of helicopters. The second one is concerned with the integrated kinematic-dynamic synthesis of planar mechanisms. The use and effectiveness of nonlinear membership functions in fuzzy formulation is also demonstrated. The numerical results indicate that the fuzzy formulation could yield results which are qualitatively different from those provided by the crisp formulation. It is felt that the fuzzy formulation will handle real life design problems on a more rational basis.

  10. Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad

    2018-05-01

    The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures by incorporating a single layer structure of neural networks optimized with genetic algorithms, sequential quadratic programming and active set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of networks by defining an error-based cost function in mean square sense. The performance of the proposed technique is validated through statistical analyses by means of the one-way ANOVA test conducted on a dataset generated by a large number of independent runs.

  11. The Design-To-Cost Manifold

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1990-01-01

    Design-to-cost is a popular technique for controlling costs. Although qualitative techniques exist for implementing design to cost, quantitative methods are sparse. In the launch vehicle and spacecraft engineering process, the question whether to minimize mass is usually an issue. The lack of quantification in this issue leads to arguments on both sides. This paper presents a mathematical technique which both quantifies the design-to-cost process and the mass/complexity issue. Parametric cost analysis generates and applies mathematical formulas called cost estimating relationships. In their most common forms, they are continuous and differentiable. This property permits the application of the mathematics of differentiable manifolds. Although the terminology sounds formidable, the application of the techniques requires only a knowledge of linear algebra and ordinary differential equations, common subjects in undergraduate scientific and engineering curricula. When the cost c is expressed as a differentiable function of n system metrics, setting the cost c to be a constant generates an n-1 dimensional subspace of the space of system metrics such that any set of metric values in that space satisfies the constant design-to-cost criterion. This space is a differentiable manifold upon which all mathematical properties of a differentiable manifold may be applied. One important property is that an easily implemented system of ordinary differential equations exists which permits optimization of any function of the system metrics, mass for example, over the design-to-cost manifold. A dual set of equations defines the directions of maximum and minimum cost change. A simplified approximation of the PRICE H(TM) production-production cost is used to generate this set of differential equations over [mass, complexity] space. The equations are solved in closed form to obtain the one dimensional design-to-cost trade and design-for-cost spaces. Preliminary results indicate that cost is relatively insensitive to changes in mass and that the reduction of complexity, both in the manufacturing process and of the spacecraft, is dominant in reducing cost.
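
    The constant-cost trade described above can be traced numerically by projecting the mass gradient onto the tangent space of the cost level set, which is the essence of the ODE system mentioned in the abstract. The Python sketch below is illustrative only: the gradient functions and the toy metrics are invented stand-ins, not the PRICE H cost estimating relationships.

      import numpy as np

      def constant_cost_descent(cost_grad, mass_grad, x0, step=1e-2, n_steps=500):
          """Trace a design-to-cost trade path: reduce mass while holding cost
          (approximately) constant by stepping along the component of -grad(mass)
          that is tangent to the constant-cost manifold."""
          x = np.asarray(x0, dtype=float)
          path = [x.copy()]
          for _ in range(n_steps):
              gc, gm = cost_grad(x), mass_grad(x)
              d = -(gm - (gm @ gc) / (gc @ gc) * gc)   # projection keeps cost constant to first order
              x = x + step * d
              path.append(x.copy())
          return np.array(path)

      # Toy metrics x = (mass, complexity): cost grows with both, mass is x[0]
      cost_grad = lambda x: np.array([0.5, 2.0 * x[1]])
      mass_grad = lambda x: np.array([1.0, 0.0])
      path = constant_cost_descent(cost_grad, mass_grad, x0=[10.0, 3.0])
      print(path[0], path[-1])   # mass falls while complexity rises along constant cost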

  12. Meta-heuristic algorithms as tools for hydrological science

    NASA Astrophysics Data System (ADS)

    Yoo, Do Guen; Kim, Joong Hoon

    2014-12-01

    In this paper, meta-heuristic optimization techniques and their applications to water resources engineering, particularly hydrological science, are introduced. In recent years, meta-heuristic optimization techniques have been developed that can overcome the problems inherent in iterative simulations. These methods are able to find good solutions and require limited computation time and memory without requiring complex derivatives. Simulation-based meta-heuristic methods such as Genetic Algorithms (GAs) and Harmony Search (HS) have powerful searching abilities, which can occasionally overcome several drawbacks of traditional mathematical methods. For example, the HS algorithm is conceptualized from the musical performance process of seeking better harmony; such optimization algorithms seek a near-global optimum determined by the value of an objective function, providing a more robust determination than the typical aesthetic estimation used in musical performance. In this paper, meta-heuristic algorithms and their applications (with a focus on GAs and HS) in hydrological science are discussed by subject, including a review of the existing literature in the field. Then, recent trends in optimization are presented and a relatively new technique, the Smallest Small World Cellular Harmony Search (SSWCHS), is briefly introduced, with a summary of promising results obtained in previous studies. Overall, previous studies have demonstrated that meta-heuristic algorithms are effective tools for the development of hydrological models and the management of water resources.
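
    To make the Harmony Search analogy concrete, a minimal Python sketch of the algorithm on a box-constrained test function follows. The parameter names (harmony memory considering rate, pitch adjusting rate, bandwidth) follow common usage; the settings and the test function are illustrative and not tuned for any hydrological model.

      import numpy as np

      def harmony_search(f, bounds, hm_size=20, hmcr=0.9, par=0.3,
                         bw=0.05, n_iter=2000, rng=None):
          """Minimal Harmony Search for box-constrained minimization.

          hmcr : harmony memory considering rate (reuse a stored value).
          par  : pitch adjusting rate (perturb a reused value by up to +/- bw).
          """
          rng = np.random.default_rng(rng)
          lo, hi = np.asarray(bounds).T
          dim = lo.size
          memory = rng.uniform(lo, hi, size=(hm_size, dim))
          scores = np.array([f(x) for x in memory])
          for _ in range(n_iter):
              new = rng.uniform(lo, hi, size=dim)          # random improvisation
              use_mem = rng.random(dim) < hmcr
              picks = memory[rng.integers(hm_size, size=dim), np.arange(dim)]
              new = np.where(use_mem, picks, new)
              adjust = use_mem & (rng.random(dim) < par)   # pitch adjustment
              new = np.clip(new + adjust * rng.uniform(-bw, bw, dim) * (hi - lo), lo, hi)
              score = f(new)
              worst = scores.argmax()
              if score < scores[worst]:                    # replace the worst harmony
                  memory[worst], scores[worst] = new, score
          best = scores.argmin()
          return memory[best], scores[best]

      # Example: minimize the sphere function in 3 dimensions
      x_best, f_best = harmony_search(lambda x: float(np.sum(x**2)),
                                      bounds=[(-5, 5)] * 3)
      print(x_best, f_best)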

  13. Optimal exposure techniques for iodinated contrast enhanced breast CT

    NASA Astrophysics Data System (ADS)

    Glick, Stephen J.; Makeev, Andrey

    2016-03-01

    Screening for breast cancer using mammography has been very successful in the effort to reduce breast cancer mortality, and its use has largely resulted in the 30% reduction in breast cancer mortality observed since 1990 [1]. However, diagnostic mammography remains an area of breast imaging that is in great need for improvement. One imaging modality proposed for improving the accuracy of diagnostic workup is iodinated contrast-enhanced breast CT [2]. In this study, a mathematical framework is used to evaluate optimal exposure techniques for contrast-enhanced breast CT. The ideal observer signal-to-noise ratio (i.e., d') figure-of-merit is used to provide a task performance based assessment of optimal acquisition parameters under the assumptions of a linear, shift-invariant imaging system. A parallel-cascade model was used to estimate signal and noise propagation through the detector, and a realistic lesion model with iodine uptake was embedded into a structured breast background. Ideal observer performance was investigated across kVp settings, filter materials, and filter thickness. Results indicated many kVp spectra/filter combinations can improve performance over currently used x-ray spectra.
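
    The ideal-observer figure of merit d' for a known signal in correlated Gaussian noise reduces to a prewhitened matched-filter SNR. The short Python sketch below computes it from a signal difference and a noise covariance; the three-pixel example is purely illustrative and does not reproduce the paper's parallel-cascade detector model or structured breast background.

      import numpy as np

      def ideal_observer_dprime(signal_diff, noise_cov):
          """Prewhitening ideal-observer detectability index:
          d'^2 = (s1 - s0)^T K^{-1} (s1 - s0)."""
          k_inv_s = np.linalg.solve(noise_cov, signal_diff)
          return float(np.sqrt(signal_diff @ k_inv_s))

      # Toy example: 3-pixel signal difference with correlated noise
      ds = np.array([0.4, 0.9, 0.4])
      K = np.array([[1.0, 0.3, 0.1],
                    [0.3, 1.0, 0.3],
                    [0.1, 0.3, 1.0]])
      print(ideal_observer_dprime(ds, K))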

  14. Optimizing pulsed Nd:YAG laser beam welding process parameters to attain maximum ultimate tensile strength for thin AISI316L sheet using response surface methodology and simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Torabi, Amir; Kolahan, Farhad

    2018-07-01

    Pulsed laser welding is a powerful technique especially suitable for joining thin sheet metals. In this study, based on experimental data, pulsed laser welding of thin AISI 316L austenitic stainless steel sheet has been modeled and optimized. The experimental data required for modeling were gathered as per a Central Composite Design matrix in Response Surface Methodology (RSM) with full replication of 31 runs. Ultimate Tensile Strength (UTS) is considered the main quality measure in laser welding. The important process parameters, including peak power, pulse duration, pulse frequency and welding speed, are selected as the inputs. The relation between the input parameters and the output response is established via a full quadratic response surface regression with a confidence level of 95%. The adequacy of the regression model was verified using Analysis of Variance results. The main effects of each factor and its interactions with the other factors were analyzed graphically in contour and surface plots. Next, to maximize joint UTS, the best combinations of parameter levels were specified using RSM. Moreover, the mathematical model was embedded in a Simulated Annealing (SA) optimization algorithm to determine the optimal values of the process parameters. The results obtained by the SA and RSM optimization techniques are in good agreement. The optimal parameter settings of a peak power of 1800 W, a pulse duration of 4.5 ms, a frequency of 4.2 Hz and a welding speed of 0.5 mm/s would result in a welded joint with 96% of the base metal UTS. Computational results clearly demonstrate that the proposed modeling and optimization procedures perform quite well for the pulsed laser welding process.
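
    Embedding a fitted response-surface model in a simulated annealing search, as described above, can be sketched generically in Python. In the example below the quadratic surrogate, parameter bounds, step size and annealing schedule are invented placeholders, not the regression fitted in the study.

      import numpy as np

      def simulated_annealing(f, bounds, n_iter=5000, t0=1.0, cooling=0.999, rng=None):
          """Minimal simulated annealing for maximizing a response-surface model
          over box-constrained process parameters (all names illustrative)."""
          rng = np.random.default_rng(rng)
          lo, hi = np.asarray(bounds).T
          x = rng.uniform(lo, hi)
          fx = f(x)
          best_x, best_f, temp = x.copy(), fx, t0
          for _ in range(n_iter):
              cand = np.clip(x + rng.normal(scale=0.05 * (hi - lo)), lo, hi)
              fc = f(cand)
              if fc > fx or rng.random() < np.exp((fc - fx) / temp):   # Metropolis acceptance
                  x, fx = cand, fc
                  if fx > best_f:
                      best_x, best_f = x.copy(), fx
              temp *= cooling
          return best_x, best_f

      # Hypothetical quadratic surrogate for UTS as a function of
      # (peak power, pulse duration, frequency, speed) -- not the fitted model
      def uts_model(p):
          center = np.array([1800.0, 4.5, 4.2, 0.5])
          scale = np.array([400.0, 1.5, 1.5, 0.3])
          return 500.0 - np.sum(((p - center) / scale) ** 2)

      bounds = [(1000, 2200), (2, 7), (2, 7), (0.2, 1.0)]
      print(simulated_annealing(uts_model, bounds))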

  15. Particle swarm optimization applied to automatic lens design

    NASA Astrophysics Data System (ADS)

    Qin, Hua

    2011-06-01

    This paper describes a novel application of the Particle Swarm Optimization (PSO) technique to lens design. A mathematical model is constructed, and merit functions of an optical system are employed as fitness functions, combining the radii of curvature, the thicknesses between lens surfaces and the refractive indices of the optical system. Using this function, the aberration correction is carried out. A design example using PSO is given. Results show that PSO is a practical and powerful optical design tool; the method is no longer dependent on the initial lens structure and can arbitrarily create search ranges for the structural parameters of a lens system, which is an important step towards automatic design with artificial intelligence.
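
    A minimal Python sketch of the particle swarm update used in such automatic design loops is given below. In practice the merit function would be evaluated by ray tracing over curvatures, thicknesses and indices; here a toy quadratic stands in for it, and all parameter values are illustrative assumptions.

      import numpy as np

      def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, rng=None):
          """Minimal particle swarm optimizer for a box-constrained merit function."""
          rng = np.random.default_rng(rng)
          lo, hi = np.asarray(bounds).T
          dim = lo.size
          x = rng.uniform(lo, hi, size=(n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
          g = pbest_f.argmin()
          gbest, gbest_f = pbest[g].copy(), pbest_f[g]
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
              x = np.clip(x + v, lo, hi)                                  # position update
              fx = np.array([f(p) for p in x])
              better = fx < pbest_f
              pbest[better], pbest_f[better] = x[better], fx[better]
              g = pbest_f.argmin()
              if pbest_f[g] < gbest_f:
                  gbest, gbest_f = pbest[g].copy(), pbest_f[g]
          return gbest, gbest_f

      # Example: minimize a toy merit function (sum of squared deviations)
      print(pso(lambda p: float(np.sum((p - 1.0) ** 2)), bounds=[(-3, 3)] * 4))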

  16. Applied Mathematical Optimization Technique on Menu Scheduling for Boarding School Student Using Delete-Reshuffle-Reoptimize Algorithm

    NASA Astrophysics Data System (ADS)

    Sufahani, Suliadi; Mohamad, Mahathir; Roslan, Rozaini; Ghazali Kamardan, M.; Che-Him, Norziha; Ali, Maselan; Khalid, Kamal; Nazri, E. M.; Ahmad, Asmala

    2018-04-01

    Boarding school students need to eat well-balanced, nutritious food that includes proper calories, energy and nutrients for proper growth, in order to repair and maintain body tissues and to prevent undesired ailments and disease. Serving a healthier menu is a notable step towards accomplishing that goal. However, planning a nutritious and balanced menu manually is complicated, inefficient and tedious. This study intends to build a mathematical model for diet planning that improves and meets the essential nutrient intake for boarding school students aged 13-18 while also keeping within the budget. It likewise gives the cook the flexibility to change any preferred menu even after the optimal solution has been produced; a recalculation procedure is performed based on that optimal solution. The data were gathered from the Ministry of Education and boarding schools' authorities. Menu planning is a well-established optimization problem. The model was solved using Binary Programming and the “Delete-Reshuffle-Reoptimize Algorithm (DRRA)”.

  17. Optimization of CO2 laser cutting parameters on Austenitic type Stainless steel sheet

    NASA Astrophysics Data System (ADS)

    Parthiban, A.; Sathish, S.; Chandrasekaran, M.; Ravikumar, R.

    2017-03-01

    Thin AISI 316L stainless steel sheet is widely used in sheet metal processing industries for specific applications, and CO2 laser cutting is one of the most popular processes for cutting such sheets into different profiles. In the present work, various cutting parameters, namely laser power (2000-4000 W), cutting speed (3500-5500 mm/min) and assist gas pressure (0.7-0.9 MPa), were investigated for cutting 2 mm thick AISI 316L stainless steel sheet. The experiments were conducted based on a Box-Behnken design. The aim of this work is to develop a mathematical model of kerf width for straight and curved profiles through response surface methodology, and the developed models for the two profiles have been compared. The quadratic models show the best agreement with the experimental data, and the shape of the profile also plays a substantial role in minimizing the kerf width. Finally, a numerical optimization technique was used to find the best laser cutting parameters for both straight and curved profile cuts.

  18. Mathematical and Computational Modeling in Complex Biological Systems

    PubMed Central

    Li, Wenyang; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult to understand for biologists and clinical doctors. Recent developments in high-throughput technologies urge systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first survey several typical mathematical modeling approaches for biological systems at different scales and analyze in depth their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology. PMID:28386558

  19. Puerto Rico water resources planning model program description

    USGS Publications Warehouse

    Moody, D.W.; Maddock, Thomas; Karlinger, M.R.; Lloyd, J.J.

    1973-01-01

    Because the use of the Mathematical Programming System-Extended (MPSX) to solve large linear and mixed integer programs requires the preparation of many input data cards, a matrix generator program that produces the MPSX input data from a much more limited set of data can expedite the use of the mixed integer programming optimization technique. The Model Definition and Control Program (MODCOP) is intended to assist a planner in preparing MPSX input data for the Puerto Rico Water Resources Planning Model. The model utilizes a mixed-integer mathematical program to identify a minimum present cost set of water resources projects (diversions, reservoirs, ground-water fields, desalinization plants, water treatment plants, and inter-basin transfers of water) which will meet a set of future water demands and to determine their sequence of construction. While MODCOP was specifically written to generate MPSX input data for the planning model described in this report, the program can be easily modified to reflect changes in the model's mathematical structure.

  20. Mathematical and Computational Modeling in Complex Biological Systems.

    PubMed

    Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult to understand for biologists and clinical doctors. Recent developments in high-throughput technologies urge systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first survey several typical mathematical modeling approaches for biological systems at different scales and analyze in depth their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology.

  1. AMERICAN-SOVIET SYMPOSIUM ON USE OF MATHEMATICAL MODELS TO OPTIMIZE WATER QUALITY MANAGEMENT HELD AT KHARKOV AND ROSTOV-ON-DON, USSR ON DECEMBER 9-16, 1975

    EPA Science Inventory

    The American-Soviet Symposium on Use of Mathematical Models to Optimize Water Quality Management examines methodological questions related to simulation and optimization modeling of processes that determine water quality of river basins. Discussants describe the general state of ...

  2. The Sizing and Optimization Language, (SOL): Computer language for design problems

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  3. Optimal Control of Thermo--Fluid Phenomena in Variable Domains

    NASA Astrophysics Data System (ADS)

    Volkov, Oleg; Protas, Bartosz

    2008-11-01

    This presentation concerns our continued research on adjoint--based optimization of viscous incompressible flows (the Navier--Stokes problem) coupled with heat conduction involving change of phase (the Stefan problem), and occurring in domains with variable boundaries. This problem is motivated by optimization of advanced welding techniques used in automotive manufacturing, where the goal is to determine an optimal heat input, so as to obtain a desired shape of the weld pool surface upon solidification. We argue that computation of sensitivities (gradients) in such free--boundary problems requires the use of the shape--differential calculus as a key ingredient. We also show that, with such tools available, the computational solution of the direct and inverse (optimization) problems can in fact be achieved in a similar manner and in a comparable computational time. Our presentation will address certain mathematical and computational aspects of the method. As an illustration we will consider the two--phase Stefan problem with contact point singularities where our approach allows us to obtain a thermodynamically consistent solution.

  4. Nonlinear programming extensions to rational function approximation methods for unsteady aerodynamic forces

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1988-01-01

    The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from different approaches are described and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.

  5. From analytic inversion to contemporary IMRT optimization: Radiation therapy planning revisited from a mathematical perspective

    PubMed Central

    Censor, Yair; Unkelbach, Jan

    2011-01-01

    In this paper we look at the development of radiation therapy treatment planning from a mathematical point of view. Historically, planning for Intensity-Modulated Radiation Therapy (IMRT) has been considered as an inverse problem. We discuss first the two fundamental approaches that have been investigated to solve this inverse problem: Continuous analytic inversion techniques on one hand, and fully-discretized algebraic methods on the other hand. In the second part of the paper, we review another fundamental question which has been subject to debate from the beginning of IMRT until the present day: The rotation therapy approach versus fixed angle IMRT. This builds a bridge from historic work on IMRT planning to contemporary research in the context of Intensity-Modulated Arc Therapy (IMAT). PMID:21616694

  6. Teko: A block preconditioning capability with concrete example applications in Navier--Stokes and MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cyr, Eric C.; Shadid, John N.; Tuminaro, Raymond S.

    This study describes the design of Teko, an object-oriented C++ library for implementing advanced block preconditioners. Mathematical design criteria that elucidate the needs of block preconditioning libraries and techniques are explained and shown to motivate the structure of Teko. For instance, a principal design choice was for Teko to strongly reflect the mathematical statement of the preconditioners to reduce development burden and permit focus on the numerics. Additional mechanisms are explained that provide a pathway to developing an optimized production capable block preconditioning capability with Teko. Finally, Teko is demonstrated on fluid flow and magnetohydrodynamics applications. In addition to highlighting the features of the Teko library, these new results illustrate the effectiveness of recent preconditioning developments applied to advanced discretization approaches.

  7. Teko: A block preconditioning capability with concrete example applications in Navier--Stokes and MHD

    DOE PAGES

    Cyr, Eric C.; Shadid, John N.; Tuminaro, Raymond S.

    2016-10-27

    This study describes the design of Teko, an object-oriented C++ library for implementing advanced block preconditioners. Mathematical design criteria that elucidate the needs of block preconditioning libraries and techniques are explained and shown to motivate the structure of Teko. For instance, a principal design choice was for Teko to strongly reflect the mathematical statement of the preconditioners to reduce development burden and permit focus on the numerics. Additional mechanisms are explained that provide a pathway to developing an optimized production capable block preconditioning capability with Teko. Finally, Teko is demonstrated on fluid flow and magnetohydrodynamics applications. In addition to highlighting the features of the Teko library, these new results illustrate the effectiveness of recent preconditioning developments applied to advanced discretization approaches.

  8. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    An optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions: GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique and results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel; MLR is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and considerably accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
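
    The division of labor in GA-MLR can be illustrated with a small sketch: a toy genetic algorithm searches over the nonlinear decay times of a biexponential model, while the linear amplitudes are recovered at each fitness evaluation by ordinary least squares. The GA operators, data, and parameter ranges below are simplified placeholders, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 200)
    # Synthetic biexponential decay with a little noise (placeholder data).
    y_obs = 2.0 * np.exp(-t / 0.8) + 0.5 * np.exp(-t / 3.5) + rng.normal(0, 0.01, t.size)

    def chi2(taus):
        """Fitness: residual sum of squares; amplitudes come from the linear-regression (MLR) step."""
        basis = np.column_stack([np.exp(-t / tau) for tau in taus])
        amps, *_ = np.linalg.lstsq(basis, y_obs, rcond=None)
        resid = y_obs - basis @ amps
        return float(resid @ resid)

    def ga(fitness, bounds, pop_size=40, n_gen=100, mut_sigma=0.2):
        """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation, elitism."""
        lo, hi = bounds
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
        for _ in range(n_gen):
            fit = np.array([fitness(ind) for ind in pop])
            new_pop = [pop[fit.argmin()].copy()]              # elitism: carry over the best individual
            while len(new_pop) < pop_size:
                i, j = rng.integers(pop_size, size=2)         # tournament selection, parent 1
                p1 = pop[i] if fit[i] < fit[j] else pop[j]
                i, j = rng.integers(pop_size, size=2)         # tournament selection, parent 2
                p2 = pop[i] if fit[i] < fit[j] else pop[j]
                alpha = rng.random(len(lo))                   # blend crossover
                child = alpha * p1 + (1 - alpha) * p2
                child += rng.normal(0, mut_sigma, len(lo))    # Gaussian mutation
                new_pop.append(np.clip(child, lo, hi))
            pop = np.array(new_pop)
        fit = np.array([fitness(ind) for ind in pop])
        return pop[fit.argmin()], fit.min()

    best_taus, best_chi2 = ga(chi2, bounds=(np.array([0.1, 0.1]), np.array([10.0, 10.0])))
    print("recovered decay times:", np.sort(best_taus), "chi^2:", best_chi2)
    ```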

  9. The influence of the free space environment on the superlight-weight thermal protection system: conception, methods, and risk analysis

    NASA Astrophysics Data System (ADS)

    Yatsenko, Vitaliy; Falchenko, Iurii; Fedorchuk, Viktor; Petrushynets, Lidiia

    2016-07-01

    This report focuses on results of the EU project "Superlight-weight thermal protection system for space application (LIGHT-TPS)", the bottom line being an analysis of the influence of the free space environment on the superlight-weight thermal protection system (TPS). The report concentrates on new methods based on synergetic, physical, and computational models, organized into four approaches. The first is the synergetic approach: the solution of problems of self-controlled synthesis of structures and the creation of self-organizing technologies is considered in connection with the larger problem of creating materials with new functional properties, and synergetics methods and mathematical design are related to current problems of materials science. The second approach describes how optimization methods can be used to determine material microstructures with optimized or targeted properties; this technique makes it possible to find unexpected microstructures with exotic behavior (e.g., negative thermal expansion coefficients). The third approach concerns dynamic probabilistic risk analysis of TPS elements with complex damage characterizations, using a physical model of the TPS and a predicted level of ionizing radiation and space weather. Focus is given mainly to the TPS model, mathematical models for dynamic probabilistic risk assessment, and software for modeling and predicting the influence of the free space environment; the probabilistic risk assessment method for the TPS is presented considering both deterministic and stochastic factors. The fourth approach presents experimental research on the temperature distribution over the surface of a 150 x 150 x 20 mm honeycomb sandwich panel during diffusion welding in vacuum, together with equipment that equalizes the temperature fields in a product to form welded joints of equal strength. More generally, many tasks in computational materials science can be posed as optimization problems, including the generation of realizations of materials with specified but limited microstructural information, an intriguing inverse problem of both fundamental and practical importance; computational models based on molecular dynamics or quantum mechanics would enable the prediction and modification of fundamental material properties, and this problem is solved using deterministic and stochastic optimization techniques. The main optimization approaches in the frame of the LIGHT-TPS project are discussed, including an optimization approach to alloys for obtaining materials with required properties using modeling techniques and experimental data. This report is supported by the EU project "Superlight-weight thermal protection system for space application (LIGHT-TPS)".

  10. Central composite rotatable design for investigation of microwave-assisted extraction of ginger (Zingiber officinale)

    NASA Astrophysics Data System (ADS)

    Fadzilah, R. Hanum; Sobhana, B. Arianto; Mahfud, M.

    2015-12-01

    A microwave-assisted extraction technique was employed to extract essential oil from ginger. The optimal conditions for microwave-assisted extraction of ginger were determined by response surface methodology. A central composite rotatable design was applied to evaluate the effects of three independent variables. The variables were microwave power of 400-800 W as X1, feed-to-solvent ratio of 0.33-0.467 as X2, and feed size of 1 cm, 0.25 cm and less than 0.2 cm as X3. The correlation analysis of the mathematical modelling indicated that a quadratic polynomial could be employed to optimize the microwave-assisted extraction of ginger. The optimal conditions to obtain the highest yield of essential oil were: microwave power of 597.163 W, feed-to-solvent ratio, and feed size of less than 0.2 cm.

  11. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.

  12. Shuttle cryogenics supply system optimization study. Volume 5, B-3, part 2: Appendix to programmers manual for math model

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A computer programmer's manual for a digital computer which will permit rapid and accurate parametric analysis of current and advanced attitude control propulsion systems is presented. The concept is for a cold helium pressurized, subcritical cryogen fluid supplied, bipropellant gas-fed attitude control propulsion system. The cryogen fluids are stored as liquids under low pressure and temperature conditions. The mathematical model provides a generalized form for the procedural technique employed in setting up the analysis program.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobson, Ian; Hiskens, Ian; Linderoth, Jeffrey

    Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive to and robust to such failures. Numerous other related topics were investigated, including energy hubs and cascading stall of induction machines.

  14. Wave Height Characteristics in the North Atlantic Ocean: a New Approach Based on Statistical and Geometrical Techniques

    DTIC Science & Technology

    2011-11-20

    Breivik and Reistad 1994; Lionello et al. 1992, 1995; Abdalla et al. 2005; Emmanouil et al. 2007) and optimization of the direct model outputs by using...neutral winds and new stress tables in WAM. ECMWF Research Department Memo R60.9/JB/0400 Breivik LA, Reistad M (1994) Assimilation of ERS-1...geometry graduate texts in mathematics, vol 120, 2nd edn. Springer-Verlag, Berlin Emmanouil G, Galanis G, Kallos G, Breivik LA, Heilberg H, Reistad M

  15. Parallel algorithms for simulating continuous time Markov chains

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
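
    For readers unfamiliar with uniformization, here is a minimal sketch of the underlying (serial) technique: a continuous-time Markov chain with generator Q is simulated by subordinating a discrete-time chain with transition matrix P = I + Q/Λ to a Poisson process of rate Λ, where Λ bounds every exit rate. The example chain is invented, and the paper's parallel synchronization machinery is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 3-state CTMC generator (each row sums to zero).
    Q = np.array([[-2.0,  1.5,  0.5],
                  [ 0.4, -1.0,  0.6],
                  [ 0.2,  0.8, -1.0]])

    Lam = np.max(-np.diag(Q))             # uniformization rate: bounds every exit rate
    P = np.eye(Q.shape[0]) + Q / Lam      # embedded DTMC transition matrix (includes self-loops)

    def simulate_uniformized(state, horizon):
        """Simulate the CTMC on [0, horizon] via uniformization."""
        path, t = [(0.0, state)], 0.0
        while True:
            t += rng.exponential(1.0 / Lam)           # next event of the Poisson process of rate Lam
            if t > horizon:
                return path
            state = rng.choice(len(P), p=P[state])    # jump (possibly a self-loop) of the embedded DTMC
            path.append((t, state))

    print(simulate_uniformized(state=0, horizon=5.0))
    ```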

  16. Recent developments of axial flow compressors under transonic flow conditions

    NASA Astrophysics Data System (ADS)

    Srinivas, G.; Raghunandana, K.; Satish Shenoy, B.

    2017-05-01

    The objective of this paper is to give a holistic view of the most advanced technologies and procedures practiced in the field of turbomachinery design. The compressor flow solver is the turbulence model used in CFD to solve viscous problems. Popular techniques like Jameson's rotated difference scheme were used to solve the potential flow equation under transonic conditions, first for two-dimensional aerofoils and later for three-dimensional wings. The gradient-based method is also popular, especially for compressor blade shape optimization. Other available optimization techniques are evolutionary algorithms (EAs) and response surface methodology (RSM). It is observed that, in order to improve the compressor flow solver and obtain agreeable results, careful attention needs to be paid to viscous relations, grid resolution, turbulence modeling and artificial viscosity in CFD. Advanced techniques like Jameson's rotated difference scheme had a substantial impact on wing and aerofoil design. For compressor blade shape optimization, evolutionary algorithms are simpler than gradient-based techniques because they can handle the parameters simultaneously by searching from multiple points in the given design space. Response surface methodology (RSM) is a method used to build empirical models of observed responses and to study experimental data systematically; it analyses the relationship between the expected responses (outputs) and the design variables (inputs) through a series of mathematical and statistical processes. RSM has recently been implemented successfully for turbomachinery blade optimization. Well-designed, high-performance axial flow compressors find application in air-breathing jet engines.

  17. Structural optimization of dental restorations using the principle of adaptive growth.

    PubMed

    Couegnat, Guillaume; Fok, Siu L; Cooper, Jonathan E; Qualtrough, Alison J E

    2006-01-01

    In a restored tooth, the stresses that occur at the tooth-restoration interface during loading could become large enough to fracture the tooth and/or restoration and it has been estimated that 92% of fractured teeth have been previously restored. The tooth preparation process for a dental restoration is a classical optimization problem: tooth reduction must be minimized to preserve tooth tissue whilst stress levels must be kept low to avoid fracture of the restored unit. The objective of the present study was to derive alternative optimized designs for a second upper premolar cavity preparation by means of structural shape optimization based on the finite element method and biological adaptive growth. Three models of cavity preparations were investigated: an inlay design for preparation of a premolar tooth, an undercut cavity design and an onlay preparation. Three restorative materials and several tooth/restoration contact conditions were utilized to replicate the in vitro situation as closely as possible. The optimization process was run for each cavity geometry. Mathematical shape optimization based on biological adaptive growth process was successfully applied to tooth preparations for dental restorations. Significant reduction in stress levels at the tooth-restoration interface where bonding is imperfect was achieved using optimized cavity or restoration shapes. In the best case, the maximum stress value was reduced by more than 50%. Shape optimization techniques can provide an efficient and effective means of reducing the stresses in restored teeth and hence has the potential of prolonging their service lives. The technique can easily be adopted for optimizing other dental restorations.

  18. Optimal Assignment Problem Applications of Finite Mathematics to Business and Economics. [and] Difference Equations with Applications. Applications of Difference Equations to Economics and Social Sciences. [and] Selected Applications of Mathematics to Finance and Investment. Applications of Elementary Algebra to Finance. [and] Force of Interest. Applications of Calculus to Finance. UMAP Units 317, 322, 381, 382.

    ERIC Educational Resources Information Center

    Gale, David; And Others

    Four units make up the contents of this document. The first examines applications of finite mathematics to business and economics. The user is expected to learn the method of optimization in optimal assignment problems. The second module presents applications of difference equations to economics and social sciences, and shows how to: 1) interpret…

  19. Optimal Cost Avoidance Investment and Pricing Strategies for Performance-Based Post-Production Service Contracts

    DTIC Science & Technology

    2011-04-30

    a BS degree in Mathematics and an MS degree in Statistics and Financial and Actuarial Mathematics from Kiev National Taras Shevchenko University...degrees from Rutgers University in Industrial Engineering (PhD and MS) and Statistics (MS) and from Universidad Nacional Autonoma de Mexico in Actuarial ...Science. His research efforts focus on developing mathematical models for the analysis, computation, and optimization of system performance with

  20. Optimizing Discharge Capacity of Li-O2 Batteries by Design of Air-Electrode Porous Structure: Multifidelity Modeling and Optimization

    DOE PAGES

    Pan, Wenxiao; Yang, Xiu; Bao, Jie; ...

    2017-01-01

    We develop a new mathematical framework to study the optimal design of air electrode microstructures for lithium-oxygen (Li-O2) batteries. It can effectively reduce the number of expensive experiments for testing different air electrodes, thereby minimizing the cost in the design of Li-O2 batteries. The design parameters to characterize an air-electrode microstructure include the porosity, surface-to-volume ratio, and parameters associated with the pore-size distribution. A surrogate model (also known as response surface) for discharge capacity is first constructed as a function of these design parameters. The surrogate model is accurate and easy to evaluate such that an optimization can be performed based on it. In particular, a Gaussian process regression method, co-kriging, is employed due to its accuracy and efficiency in predicting high-dimensional responses from a combination of multifidelity data. Specifically, a small amount of data from high-fidelity simulations are combined with a large number of data obtained from computationally efficient low-fidelity simulations. The high-fidelity simulation is based on a multiscale modeling approach that couples the microscale (pore-scale) and macroscale (device-scale) models, whereas the low-fidelity simulation is based on an empirical macroscale model. The constructed response surface provides quantitative understanding and prediction about how air electrode microstructures affect the discharge performance of Li-O2 batteries. The succeeding sensitivity analysis via Sobol indices and optimization via genetic algorithm ultimately offer reliable guidance on the optimal design of air electrode microstructures. The proposed mathematical framework can be generalized to investigate other new energy storage techniques and materials.
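
    The multifidelity idea can be illustrated with a simplified additive-correction surrogate (not the full co-kriging formulation used in the paper): a Gaussian process is fitted to many cheap low-fidelity samples, and a second Gaussian process learns the high-fidelity correction from a few expensive samples. The fidelity models, design variable, and sample sizes below are invented placeholders.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Placeholder fidelity models of "discharge capacity" vs. a single design parameter (e.g., porosity).
    def low_fidelity(x):    # cheap empirical model (illustrative)
        return np.sin(8 * x) * x
    def high_fidelity(x):   # stand-in for the expensive multiscale simulation (illustrative)
        return np.sin(8 * x) * x + 0.3 * x**2 + 0.1

    rng = np.random.default_rng(0)
    X_lo = rng.uniform(0, 1, 40).reshape(-1, 1)    # many cheap low-fidelity samples
    X_hi = rng.uniform(0, 1, 6).reshape(-1, 1)     # few expensive high-fidelity samples

    kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)

    # Step 1: surrogate of the low-fidelity response.
    gp_lo = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp_lo.fit(X_lo, low_fidelity(X_lo).ravel())

    # Step 2: surrogate of the high-fidelity correction (residual w.r.t. the low-fidelity surrogate).
    resid = high_fidelity(X_hi).ravel() - gp_lo.predict(X_hi)
    gp_corr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp_corr.fit(X_hi, resid)

    def multifidelity_predict(X):
        """Multifidelity prediction = low-fidelity surrogate + learned correction."""
        return gp_lo.predict(X) + gp_corr.predict(X)

    X_test = np.linspace(0, 1, 5).reshape(-1, 1)
    print(np.column_stack([multifidelity_predict(X_test), high_fidelity(X_test).ravel()]))
    ```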

  1. Optimizing Discharge Capacity of Li-O2 Batteries by Design of Air-Electrode Porous Structure: Multifidelity Modeling and Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Yang, Xiu; Bao, Jie

    We develop a new mathematical framework to study the optimal design of air electrode microstructures for lithium-oxygen (Li-O2) batteries. It can effectively reduce the number of expensive experiments for testing different air electrodes, thereby minimizing the cost in the design of Li-O2 batteries. The design parameters to characterize an air-electrode microstructure include the porosity, surface-to-volume ratio, and parameters associated with the pore-size distribution. A surrogate model (also known as response surface) for discharge capacity is first constructed as a function of these design parameters. The surrogate model is accurate and easy to evaluate such that an optimization can be performed based on it. In particular, a Gaussian process regression method, co-kriging, is employed due to its accuracy and efficiency in predicting high-dimensional responses from a combination of multifidelity data. Specifically, a small amount of data from high-fidelity simulations are combined with a large number of data obtained from computationally efficient low-fidelity simulations. The high-fidelity simulation is based on a multiscale modeling approach that couples the microscale (pore-scale) and macroscale (device-scale) models, whereas the low-fidelity simulation is based on an empirical macroscale model. The constructed response surface provides quantitative understanding and prediction about how air electrode microstructures affect the discharge performance of Li-O2 batteries. The succeeding sensitivity analysis via Sobol indices and optimization via genetic algorithm ultimately offer reliable guidance on the optimal design of air electrode microstructures. The proposed mathematical framework can be generalized to investigate other new energy storage techniques and materials.

  2. Robust Constrained Blackbox Optimization with Surrogates

    DTIC Science & Technology

    2015-05-21

    algorithms with OPAL . Mathematical Programming Computation, 6(3):233–254, 2014. 6. M.S. Ouali, H. Aoudjit, and C. Audet. Replacement scheduling of a fleet of...Orban. Optimization of Algorithms with OPAL . Mathematical Programming Computation, 6(3), 233-254, September 2014. DISTRIBUTION A: Distribution

  3. Application of Layered Perforation Profile Control Technique to Low Permeable Reservoir

    NASA Astrophysics Data System (ADS)

    Wei, Sun

    2018-01-01

    It is difficult to satisfy the demand for profile control of a complex well section and multi-layer reservoir by adopting conventional profile control technology; therefore, research is conducted on adjusting the injection-production profile through optimization of layered perforating parameters. That is, in the case of co-production from multiple layers, the water absorption of each layer is adjusted by adjusting the perforating parameters, so as to balance the injection-production profile of the whole well section and ultimately enhance the oil displacement efficiency of water flooding. By applying oil-water two-phase percolation theory and the relationship between perforating damage and capacity, a mathematical model for adjusting the injection-production profile through layered perforating parameter optimization is established, and perforating parameter optimization software is programmed. Different types of optimization design work are carried out according to different geological conditions and construction purposes using the perforating optimization design software; furthermore, an application test is done for a low permeable reservoir, and the water injection profile becomes significantly more balanced after perforation with optimized parameters, thereby achieving a good field application effect.

  4. Optimal cure cycle design of a resin-fiber composite laminate

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

    A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of a composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first-order differential equations which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived by using the direct differentiation method and are solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle, subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized, and various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.

  5. Search for a new economic optimum in the management of household waste in Tiaret city (western Algeria).

    PubMed

    Asnoune, M; Abdelmalek, F; Djelloul, A; Mesghouni, K; Addou, A

    2016-11-01

    In household waste management, the objective is always to conceive an optimal integrated system, where the terms 'optimal' and 'integrated' refer generally to a combination of the waste and the techniques of treatment, valorization and elimination, usually at the lowest possible cost. Management optimization of household waste using operational methodologies has not yet been applied in any Algerian district. We propose an optimization of the valorization of household waste in Tiaret city in order to lower the total management cost. The methodology is modelled by non-linear mathematical equations using 28 decision variables and aims to optimally assign the seven components of household waste (i.e. plastic, cardboard paper, glass, metals, textiles, organic matter and others) among four treatment centres [i.e. waste-to-energy (WTE) or incineration, composting (CM), anaerobic digestion (ANB) or methanization, and landfilling (LF)]. The analysis of the results shows that the variation in total cost is mainly due to the assignment of waste among the treatment centres and that certain treatments cannot be applied to the household waste of Tiaret city. On the other hand, certain valorization techniques have been favoured by the optimization. In this work, four scenarios have been proposed to optimize the system cost; the modelling shows that the mixed scenario (the three treatment centres CM, ANB, LF) offers a better combination of waste-treatment technologies, with an optimal solution for the system (cost and profit). © The Author(s) 2016.

  6. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
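
    As a toy illustration of the planning-and-management formulations reviewed above, the following linear program allocates monthly deliveries between surface water and groundwater to meet demand at minimum operating cost. The costs, demands, and capacity limits are invented for the example and do not come from the review.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    months = 3
    demand = np.array([120.0, 150.0, 100.0])   # water demand per month (e.g., thousand m^3)
    cost_sw, cost_gw = 1.0, 1.8                # unit operating costs for surface water and groundwater
    sw_cap, gw_cap = 100.0, 80.0               # per-month supply capacities

    # Decision variables: [surface_1..surface_m, ground_1..ground_m]; objective is total operating cost.
    c = np.concatenate([np.full(months, cost_sw), np.full(months, cost_gw)])

    # Equality constraints: surface_m + ground_m = demand_m for each month.
    A_eq = np.hstack([np.eye(months), np.eye(months)])
    b_eq = demand

    bounds = [(0, sw_cap)] * months + [(0, gw_cap)] * months

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print("surface allocations:", res.x[:months])
    print("groundwater allocations:", res.x[months:])
    print("total cost:", res.fun)
    ```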

  7. Discrete mathematics for spatial data classification and understanding

    NASA Astrophysics Data System (ADS)

    Mussio, Luigi; Nocera, Rossella; Poli, Daniela

    1998-12-01

    Data processing, in the field of information technology, requires new tools, involving discrete mathematics, like data compression, signal enhancement, data classification and understanding, hypertexts and multimedia (considering educational aspects too), because the mass of data implies automatic data management and doesn't permit any a priori knowledge. The methodologies and procedures used in this class of problems concern different kinds of segmentation techniques and relational strategies, like clustering, parsing, vectorization, formalization, fitting and matching. On the other hand, the complexity of this approach imposes to perform optimal sampling and outlier detection just at the beginning, in order to define the set of data to be processed: rough data supply very poor information. For these reasons, no hypotheses about the distribution behavior of the data can be generally done and a judgment should be acquired by distribution-free inference only.

  8. SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    PubMed Central

    Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang

    2009-01-01

    The increased programmability of graphics hardware allows efficient graphical processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569

  9. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-10

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.

  10. Guaranteed epsilon-optimal treatment plans with the minimum number of beams for stereotactic body radiation therapy

    NASA Astrophysics Data System (ADS)

    Yarmand, Hamed; Winey, Brian; Craft, David

    2013-09-01

    Stereotactic body radiation therapy (SBRT) is characterized by delivering a high amount of dose in a short period of time. In SBRT the dose is delivered using open fields (e.g., beam’s-eye-view) known as ‘apertures’. Mathematical methods can be used for optimizing treatment planning for delivery of sufficient dose to the cancerous cells while keeping the dose to surrounding organs at risk (OARs) minimal. Two important elements of a treatment plan are quality and delivery time. Quality of a plan is measured based on the target coverage and dose to OARs. Delivery time heavily depends on the number of beams used in the plan as the setup times for different beam directions constitute a large portion of the delivery time. Therefore the ideal plan, in which all potential beams can be used, will be associated with a long impractical delivery time. We use the dose to OARs in the ideal plan to find the plan with the minimum number of beams which is guaranteed to be epsilon-optimal (i.e., a predetermined maximum deviation from the ideal plan is guaranteed). Since the treatment plan optimization is inherently a multi-criteria-optimization problem, the planner can navigate the ideal dose distribution Pareto surface and select a plan of desired target coverage versus OARs sparing, and then use the proposed technique to reduce the number of beams while guaranteeing epsilon-optimality. We use mixed integer programming (MIP) for optimization. To reduce the computation time for the resultant MIP, we use two heuristics: a beam elimination scheme and a family of heuristic cuts, known as ‘neighbor cuts’, based on the concept of ‘adjacent beams’. We show the effectiveness of the proposed technique on two clinical cases, a liver and a lung case. Based on our technique we propose an algorithm for fast generation of epsilon-optimal plans.
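
    A stripped-down version of the beam-number minimization can be written as a small mixed integer program: continuous beam weights are linked to binary "beam used" indicators, target coverage and OAR bounds are enforced, and the number of active beams is minimized. The dose-influence matrices, dose levels, and epsilon-relaxed OAR bounds below are invented placeholders; the ideal-plan Pareto navigation, the beam elimination heuristic, and the neighbor cuts from the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Toy dose-influence setup (invented numbers): 4 candidate beams, 3 target voxels, 2 OAR voxels.
    A_target = np.array([[1.0, 0.1, 0.0, 0.3],
                         [0.2, 1.0, 0.1, 0.0],
                         [0.0, 0.2, 1.0, 0.4]])   # dose to target voxels per unit beam weight
    A_oar    = np.array([[0.1, 0.05, 0.2, 0.3],
                         [0.2, 0.1, 0.05, 0.25]]) # dose to OAR voxels per unit beam weight
    d_min   = np.full(3, 10.0)                    # prescribed minimum target dose
    oar_max = np.full(2, 5.0)                     # epsilon-relaxed OAR bounds (would come from the ideal plan)
    n_beams = A_target.shape[1]
    big_m   = 100.0                               # upper bound on any beam weight

    # Variables: [w_1..w_B (continuous beam weights), z_1..z_B (binary "beam used" indicators)].
    c = np.concatenate([np.zeros(n_beams), np.ones(n_beams)])   # minimize the number of beams used

    constraints = [
        LinearConstraint(np.hstack([A_target, np.zeros((3, n_beams))]), d_min, np.inf),          # coverage
        LinearConstraint(np.hstack([A_oar, np.zeros((2, n_beams))]), -np.inf, oar_max),          # OAR sparing
        LinearConstraint(np.hstack([np.eye(n_beams), -big_m * np.eye(n_beams)]), -np.inf, 0.0),  # w_b <= M z_b
    ]

    res = milp(c=c,
               constraints=constraints,
               integrality=np.concatenate([np.zeros(n_beams), np.ones(n_beams)]),
               bounds=Bounds(np.zeros(2 * n_beams),
                             np.concatenate([np.full(n_beams, big_m), np.ones(n_beams)])))
    print("beam weights:", np.round(res.x[:n_beams], 2))
    print("beams used:", np.round(res.x[n_beams:]).astype(int))
    ```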

  11. Optimal Repair And Replacement Policy For A System With Multiple Components

    DTIC Science & Technology

    2016-06-17

    Numerical Demonstration To implement the linear program, we use the Python Programming Language (PSF 2016) with the Pyomo optimization modeling language...opre.1040.0133. Hart, W.E., C. Laird, J. Watson, D.L. Woodruff. 2012. Pyomo–optimization modeling in python , vol. 67. Springer Science & Business...Media. Hart, W.E., J. Watson, D.L. Woodruff. 2011. Pyomo: modeling and solving mathematical programs in python . Mathematical Programming Computation 3(3

  12. Optimizing oil spill cleanup efforts: A tactical approach and evaluation framework.

    PubMed

    Grubesic, Tony H; Wei, Ran; Nelson, Jake

    2017-12-15

    Although anthropogenic oil spills vary in size, duration and severity, their broad impacts on complex social, economic and ecological systems can be significant. Questions pertaining to the operational challenges associated with the tactical allocation of human resources, cleanup equipment and supplies to areas impacted by a large spill are particularly salient when developing mitigation strategies for extreme oiling events. The purpose of this paper is to illustrate the application of advanced oil spill modeling techniques in combination with a developed mathematical model to spatially optimize the allocation of response crews and equipment for cleaning up an offshore oil spill. The results suggest that the detailed simulations and optimization model are a good first step in allowing both communities and emergency responders to proactively plan for extreme oiling events and develop response strategies that minimize the impacts of spills. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Market penetration of energy supply technologies

    NASA Astrophysics Data System (ADS)

    Condap, R. J.

    1980-03-01

    Techniques to incorporate the concepts of profit-induced growth and risk aversion into policy-oriented optimization models of the domestic energy sector are examined. After reviewing the pertinent market penetration literature, simple mathematical programs in which the introduction of new energy technologies is constrained primarily by the reinvestment of profits are formulated. The main results involve the convergence behavior of technology production levels under various assumptions about the form of the energy demand function. Next, profitability growth constraints are embedded in a full-scale model of U.S. energy-economy interactions. A rapidly convergent algorithm is developed to utilize optimal shadow prices in the computation of profitability for individual technologies. Allowance is made for additional policy variables such as government funding and taxation. The result is an optimal deployment schedule for current and future energy technologies which is consistent with the sector's ability to finance capacity expansion.

  14. Example-based human motion denoising.

    PubMed

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in the missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and to those in state-of-art motion capture data processing software such as Vicon Blade.

  15. OPTIMIZATION OF COUNTERCURRENT STAGED PROCESSES.

    DTIC Science & Technology

    CHEMICAL ENGINEERING , OPTIMIZATION), (*DISTILLATION, OPTIMIZATION), INDUSTRIAL PRODUCTION, INDUSTRIAL EQUIPMENT, MATHEMATICAL MODELS, DIFFERENCE EQUATIONS, NONLINEAR PROGRAMMING, BOUNDARY VALUE PROBLEMS, NUMERICAL INTEGRATION

  16. Understanding the Development of Mathematical Work in the Context of the Classroom

    ERIC Educational Resources Information Center

    Kuzniak, Alain; Nechache, Assia; Drouhard, J. P.

    2016-01-01

    According to our approach to mathematics education, the optimal aim of the teaching of mathematics is to assist students in achieving efficient mathematical work. But, what does efficient exactly mean in that case? And how can teachers reach this objective? The model of Mathematical Working Spaces with its three dimensions--semiotic, instrumental,…

  17. CATO: a CAD tool for intelligent design of optical networks and interconnects

    NASA Astrophysics Data System (ADS)

    Chlamtac, Imrich; Ciesielski, Maciej; Fumagalli, Andrea F.; Ruszczyk, Chester; Wedzinga, Gosse

    1997-10-01

    Increasing communication speed requirements have created a great interest in very high speed optical and all-optical networks and interconnects. The design of these optical systems is a highly complex task, requiring the simultaneous optimization of various parts of the system, ranging from optical components' characteristics to access protocol techniques. Currently there are no computer aided design (CAD) tools on the market to support the interrelated design of all parts of optical communication systems, thus the designer has to rely on costly and time-consuming testbed evaluations. The objective of the CATO (CAD tool for optical networks and interconnects) project is to develop a prototype of an intelligent CAD tool for the specification, design, simulation and optimization of optical communication networks. CATO allows the user to build an abstract, possibly incomplete, model of the system, and determine its expected performance. Based on design constraints provided by the user, CATO will automatically complete an optimum design, using mathematical programming techniques, intelligent search methods and artificial intelligence (AI). Initial design and testing of a CATO prototype (CATO-1) has been completed recently. The objective was to prove the feasibility of combining AI techniques, simulation techniques, an optical device library and a graphical user interface into a flexible CAD tool for obtaining optimal communication network designs in terms of system cost and performance. CATO-1 is an experimental tool for designing packet-switching wavelength division multiplexing all-optical communication systems using a LAN/MAN ring topology as the underlying network. The two specific AI algorithms incorporated are simulated annealing and a genetic algorithm. CATO-1 finds the optimal number of transceivers for each network node, using an objective function that includes the cost of the devices and the overall system performance.

  18. Optimization of Coolant Technique Conditions for Machining A319 Aluminium Alloy Using Response Surface Method (RSM)

    NASA Astrophysics Data System (ADS)

    Zainal Ariffin, S.; Razlan, A.; Ali, M. Mohd; Efendee, A. M.; Rahman, M. M.

    2018-03-01

    Background/Objectives: The paper discusses the optimum cutting parameters with coolant technique conditions (1.0 mm nozzle orifice, wet and dry) to optimize surface roughness, temperature and tool wear in the machining process based on the selected setting parameters. The selected cutting parameters for this study were the cutting speed, feed rate, depth of cut and coolant technique condition. Methods/Statistical Analysis: Experiments were conducted and investigated based on Design of Experiments (DOE) with the Response Surface Method. The research on aggressive machining of aluminium alloy (A319) for automotive applications is an effort to understand the machining concept, which is widely used in a variety of manufacturing industries, especially the automotive industry. Findings: The results show that the dominant failure modes, surface roughness, temperature and tool wear, increase during machining when the 1.0 mm nozzle orifice is used, and the technique can also help minimize built-up edge on the A319. The surface roughness, productivity and optimization of cutting speed in the technical and commercial aspects of manufacturing A319 automotive components are discussed for further work. Applications/Improvements: The results are also beneficial in minimizing costs and improving the productivity of manufacturing firms. According to the mathematical model and equations generated by CCD-based RSM, experiments were performed and a cutting coolant condition technique using the nozzle size was obtained that reduces tool wear, surface roughness and temperature. The results have been analyzed and optimization has been carried out for selecting cutting parameters, showing that the effectiveness and efficiency of the system can be identified, which helps to solve potential problems.

  19. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on Central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (Brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NO x , unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE) and emission (CO, NO x , HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NO x , HC, smoke, a multiobjective optimization problem is formulated. Nondominated sorting genetic algorithm-II is used in predicting the Pareto optimal sets of solution. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto optimal sets of solution can be used as guidelines for the end users to select optimal combination of engine output and emission parameters depending upon their own requirements.
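
    The core building block of NSGA-II, non-dominated sorting, can be sketched compactly: given sampled candidate engine settings and hypothetical fitted response functions for BTE and NOx, the code below extracts the Pareto-optimal set by removing dominated candidates. The response functions and variable ranges are placeholders, and the full NSGA-II machinery (crowding distance, selection, crossover, mutation) is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder regression-style response surfaces in two normalized engine variables;
    # the real models would come from the RSM fit described above.
    def bte(x):   # brake thermal efficiency: to be maximized
        return 30 + 5 * x[:, 0] - 2 * (x[:, 0] - 0.5) ** 2 + 3 * x[:, 1]
    def nox(x):   # NOx emission: to be minimized
        return 200 + 150 * x[:, 0] + 80 * x[:, 1] ** 2

    X = rng.uniform(0, 1, size=(500, 2))                 # sampled candidate settings
    F = np.column_stack([-bte(X), nox(X)])               # both objectives as minimization: (-BTE, NOx)

    def pareto_front(F):
        """Return a boolean mask of non-dominated rows (all objectives minimized)."""
        n = F.shape[0]
        nondominated = np.ones(n, dtype=bool)
        for i in range(n):
            if not nondominated[i]:
                continue
            # j dominates i if it is no worse in every objective and strictly better in at least one.
            dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            if dominates_i.any():
                nondominated[i] = False
        return nondominated

    mask = pareto_front(F)
    print(f"{mask.sum()} Pareto-optimal candidates out of {len(X)}")
    print(np.column_stack([X[mask], -F[mask, 0], F[mask, 1]])[:5])   # settings, BTE, NOx
    ```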

  20. Experimental Mathematics and Mathematical Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.; Borwein, Jonathan M.; Broadhurst, David

    2009-06-26

    One of the most effective techniques of experimental mathematics is to compute mathematical entities such as integrals, series or limits to high precision, then attempt to recognize the resulting numerical values. Recently these techniques have been applied with great success to problems in mathematical physics. Notable among these applications are the identification of some key multi-dimensional integrals that arise in Ising theory, quantum field theory and in magnetic spin theory.
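
    The "compute to high precision, then recognize" workflow described here can be sketched with the mpmath library (assumed to be installed); the integral below is a textbook example, not one of the Ising or quantum-field-theory integrals from the cited work.

```python
# Hedged sketch of the experimental-mathematics workflow: evaluate a quantity to high
# precision, then try to recognize it with an integer-relation (PSLQ) search.
from mpmath import mp, quad, pslq, pi, log

mp.dps = 60  # working precision: 60 decimal digits

# Evaluate a definite integral numerically to high precision.
I = quad(lambda x: log(1 + x) / (1 + x**2), [0, 1])

# Search for an integer relation a*I + b*(pi*log(2)) = 0 with PSLQ.
rel = pslq([I, pi * log(2)], tol=mp.mpf(10) ** -40)
print(rel)  # expected: [8, -1] (up to sign), i.e. I = pi*log(2)/8
```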

  1. Mathematical techniques: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Articles on theoretical and applied mathematics are introduced. The articles cover information that might be of interest to workers in statistics and information theory, computational aids that could be used by scientists and engineers, and mathematical techniques for design and control.

  2. Virtual Monoenergetic Images From a Novel Dual-Layer Spectral Detector Computed Tomography Scanner in Portal Venous Phase: Adjusted Window Settings Depending on Assessment Focus Are Essential for Image Interpretation.

    PubMed

    Hickethier, Tilman; Iuga, Andra-Iza; Lennartz, Simon; Hauger, Myriam; Byrtus, Jonathan; Luetkens, Julian A; Haneder, Stefan; Maintz, David; Doerner, Jonas

    We aimed to determine optimal window settings for conventional polyenergetic (PolyE) and virtual monoenergetic images (MonoE) derived from abdominal portal venous phase computed tomography (CT) examinations on a novel dual-layer spectral-detector CT (SDCT). From 50 patients, SDCT data sets MonoE at 40 kiloelectron volt as well as PolyE were reconstructed and best individual window width and level values manually were assessed separately for evaluation of abdominal arteries as well as for liver lesions. Via regression analysis, optimized individual values were mathematically calculated. Subjective image quality parameters, vessel, and liver lesion diameters were measured to determine influences of different W/L settings. Attenuation and contrast-to-noise values were significantly higher in MonoE compared with PolyE. Compared with standard settings, almost all adjusted W/L settings varied significantly and yielded higher subjective scoring. No differences were found between manually adjusted and mathematically calculated W/L settings. PolyE and MonoE from abdominal portal venous phase SDCT examinations require appropriate W/L settings depending on reconstruction technique and assessment focus.

  3. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory.

    PubMed

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A

    2016-08-25

    There are several applications in computational biophysics that require the optimization of discrete interacting states, for example, amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of "maximum flow-minimum cut" graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.
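
    The following toy example only illustrates the "maximum flow-minimum cut" primitive on which the algorithm rests, using the networkx package (assumed available). It does not reproduce the authors' protein-specific construction of the interaction energy graph; the node names and capacities are arbitrary.

```python
# Illustrative max-flow/min-cut computation (not the paper's energy-graph construction).
import networkx as nx

G = nx.DiGraph()
# Toy flow network: 's' and 't' play the roles of the two labels, interior nodes
# stand in for sites, and capacities stand in for (hypothetical) energy terms.
edges = [
    ("s", "a", 3.0), ("s", "b", 2.0),
    ("a", "b", 1.0), ("a", "t", 2.0),
    ("b", "t", 3.0),
]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print("min cut value (minimum 'energy'):", cut_value)
print("assignment:", source_side, "|", sink_side)
```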

  4. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purvine, Emilie AH; Monson, Kyle E.; Jurrus, Elizabeth R.

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of maximum flow-minimum cut graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered.

  5. Energy Minimization of Discrete Protein Titration State Models Using Graph Theory

    PubMed Central

    Purvine, Emilie; Monson, Kyle; Jurrus, Elizabeth; Star, Keith; Baker, Nathan A.

    2016-01-01

    There are several applications in computational biophysics which require the optimization of discrete interacting states; e.g., amino acid titration states, ligand oxidation states, or discrete rotamer angles. Such optimization can be very time-consuming as it scales exponentially in the number of sites to be optimized. In this paper, we describe a new polynomial-time algorithm for optimization of discrete states in macromolecular systems. This algorithm was adapted from image processing and uses techniques from discrete mathematics and graph theory to restate the optimization problem in terms of “maximum flow-minimum cut” graph analysis. The interaction energy graph, a graph in which vertices (amino acids) and edges (interactions) are weighted with their respective energies, is transformed into a flow network in which the value of the minimum cut in the network equals the minimum free energy of the protein, and the cut itself encodes the state that achieves the minimum free energy. Because of its deterministic nature and polynomial-time performance, this algorithm has the potential to allow for the ionization state of larger proteins to be discovered. PMID:27089174

  6. Box-Behnken design for investigation of microwave-assisted extraction of patchouli oil

    NASA Astrophysics Data System (ADS)

    Kusuma, Heri Septya; Mahfud, Mahfud

    2015-12-01

    The microwave-assisted extraction (MAE) technique was employed to extract the essential oil from patchouli (Pogostemon cablin). The optimal conditions for microwave-assisted extraction of patchouli oil were determined by response surface methodology. A Box-Behnken design (BBD) was applied to evaluate the effects of three independent variables (microwave power (A: 400-800 W), plant material to solvent ratio (B: 0.10-0.20 g mL-1) and extraction time (C: 20-60 min)) on the extraction yield of patchouli oil. The correlation analysis of the mathematical regression model indicated that a quadratic polynomial model could be employed to optimize the microwave extraction of patchouli oil. The optimal extraction conditions for patchouli oil were a microwave power of 634.024 W, a plant material to solvent ratio of 0.147648 g mL-1 and an extraction time of 51.6174 min. The maximum patchouli oil yield was 2.80516% under these optimal conditions. Under these extraction conditions, the experimental values agreed with the results predicted by analysis of variance, indicating the high fitness of the model used and the success of response surface methodology in optimizing the extraction conditions.
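
    A hedged sketch of the same RSM workflow is shown below: fit a full quadratic model to designed experiments, then search the fitted surface for the optimum within the factor bounds. The design points and the `true` response function are synthetic placeholders, not the paper's measurements.

```python
# Sketch of the RSM workflow with synthetic data: fit a quadratic response surface
# in coded units, then maximize the fitted yield over the factor bounds.
import numpy as np
from scipy.optimize import minimize

lo = np.array([400, 0.10, 20])      # factor lower bounds: power (W), ratio (g/mL), time (min)
hi = np.array([800, 0.20, 60])

def code(x):                        # map natural units to coded [-1, 1] units (usual RSM practice)
    return 2 * (x - lo) / (hi - lo) - 1

def quad_features(Xc):
    """Full quadratic model terms: 1, A, B, C, AB, AC, BC, A^2, B^2, C^2."""
    A, B, C = Xc[:, 0], Xc[:, 1], Xc[:, 2]
    return np.column_stack([np.ones(len(Xc)), A, B, C, A*B, A*C, B*C, A**2, B**2, C**2])

rng = np.random.default_rng(1)
X = rng.uniform(lo, hi, size=(15, 3))                     # hypothetical design points
true = lambda x: 2.8 - 1e-5*(x[:, 0]-630)**2 - 200*(x[:, 1]-0.15)**2 - 1e-3*(x[:, 2]-50)**2
y = true(X) + rng.normal(0, 0.01, len(X))                 # "measured" yields

beta, *_ = np.linalg.lstsq(quad_features(code(X)), y, rcond=None)   # fitted coefficients

def neg_yield(x):                                         # negative predicted yield at natural-unit x
    return -(quad_features(code(x.reshape(1, -1))) @ beta)[0]

res = minimize(neg_yield, x0=(lo + hi) / 2, bounds=list(zip(lo, hi)))
print("predicted optimum (W, g/mL, min):", res.x, " yield ≈", -res.fun)
```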

  7. A mathematical model for simulating noise suppression of lined ejectors

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.

    1994-01-01

    A mathematical model containing the essential features embodied in the noise suppression of lined ejectors is presented. Although some simplification of the physics is necessary to render the model mathematically tractable, the present model is the most versatile and technologically advanced available at this time. A system of linearized equations and the boundary conditions governing the sound field are derived starting from the equations of fluid dynamics. A nonreflecting boundary condition is developed. In view of the complex nature of the equations, a parametric study requires the use of numerical techniques and modern computers. A finite element algorithm that solves the differential equations coupled with the boundary condition is then introduced. The numerical method results in a matrix equation with several hundred thousand degrees of freedom that is solved efficiently on a supercomputer. The model is validated by comparing results either with exact solutions or with approximate solutions from other works. In each case, excellent correlations are obtained. The usefulness of the model as an optimization tool and the importance of variable impedance liners as a mechanism for achieving broadband suppression within a lined ejector are demonstrated.

  8. Bottle Caps as Prekindergarten Mathematical Tools

    ERIC Educational Resources Information Center

    Raisor, Jill M.; Hudson, Rick A.

    2018-01-01

    Early childhood provides a time of crucial growth in all developmental domains. Prekindergarten is an optimal time for young children to use objects of play as a medium to explore new cognitive concepts, including mathematical structure. Mathematical structure plays an important role in providing students a means to reason about mathematics,…

  9. Numerical study of combustion processes in afterburners

    NASA Technical Reports Server (NTRS)

    Zhou, Xiaoqing; Zhang, Xiaochun

    1986-01-01

    Mathematical models and numerical methods are presented for computer modeling of aeroengine afterburners. A computer code GEMCHIP is described briefly. The algorithms SIMPLER, for gas flow predictions, and DROPLET, for droplet flow calculations, are incorporated in this code. The block correction technique is adopted to facilitate convergence. The method of handling irregular shapes of combustors and flameholders is described. The predicted results for a low-bypass-ratio turbofan afterburner in the cases of gaseous combustion and multiphase spray combustion are provided and analyzed, and engineering guides for afterburner optimization are presented.

  10. Automated design optimization of supersonic airplane wing structures under dynamic constraints

    NASA Technical Reports Server (NTRS)

    Fox, R. L.; Miura, H.; Rao, S. S.

    1972-01-01

    The problems of the preliminary and first level detail design of supersonic aircraft wings are stated as mathematical programs and solved using automated optimum design techniques. The problem is approached in two phases: the first is a simplified equivalent plate model in which the envelope, planform and structural parameters are varied to produce a design, the second is a finite element model with fixed configuration in which the material distribution is varied. Constraints include flutter, aeroelastically computed stresses and deflections, natural frequency and a variety of geometric limitations.

  11. Mathematical model of highways network optimization

    NASA Astrophysics Data System (ADS)

    Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.

    2017-12-01

    The article deals with the design of highway networks. Studies show that the main requirement placed on the road network by road transport is to realize all the transport links it serves at the least possible cost. The goal of optimizing a network of highways is to increase the efficiency of transport. A large number of factors must be taken into account, and it is difficult to quantify and qualify their impact on the road network. In this paper, we propose constructing an optimal variant for locating the road network on the basis of a mathematical model. The article defines the criteria for optimality and the objective functions that reflect the requirements for the road network. The condition that most fully satisfies optimality is the minimization of road and transport costs, and we adopted this indicator as the criterion of optimality in the economic-mathematical model of a highway network. Studies have shown that each point in the optimally connected road network is associated with all other corresponding points along the directions providing the least financial costs necessary to move passengers and cargo from this point to the other corresponding points. The article presents general principles for constructing an optimal network of roads.

  12. Topographical optimization of structures for use in musical instruments and other applications

    NASA Astrophysics Data System (ADS)

    Kirkland, William Brandon

    Mallet percussion instruments such as the xylophone, marimba, and vibraphone have been produced and tuned since their inception by arduously grinding the keys to achieve harmonic ratios between their 1st, 2nd, and 3rd transverse modes. In consideration of this, it would be preferable to have defined mathematical models such that the keys of these instruments can be produced quickly and reliably. Additionally, physical modeling of these keys or beams provides a useful application of non-uniform beam vibrations as studied by Euler-Bernoulli and Timoshenko beam theories. This thesis work presents a literature review of previous studies regarding mallet percussion instrument design and the optimization of non-uniform keys. The progression of previous research from strictly mathematical approaches to finite element methods is shown, ultimately arriving at the most current optimization techniques used by other authors. However, previous research varies slightly in the relative degree of accuracy to which a non-uniform beam can be modeled. Typically, accuracies are reported in the literature as 1% to 2% error. While this seems attractive, musical tolerances require 0.25% error, and beams are otherwise unsuitable. This research seeks to build on and add to the previous field research by optimizing beam topology and machining keys within tolerances such that no further tuning is required. The optimization methods relied on finite element analysis and used harmonic modal frequencies as constraints rather than arguments of an error function to be optimized. Instead, the beam mass was minimized while the modal frequency constraints were required to be satisfied within a 0.25% tolerance. The final optimized and machined keys of an A4 vibraphone were shown to be accurate within the required musical tolerances, with strong resonance at the designed frequencies. The findings solidify a systematic method for designing musical structures for accuracy and repeatability upon manufacture.

  13. Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia

    NASA Astrophysics Data System (ADS)

    Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg

    2013-03-01

    Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring the processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). The multiple linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level. The optimal input combination was found to comprise the current sea level as well as five previous sea level values. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error and variance account statistics were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while the adaptive learning rate and Levenberg-Marquardt algorithms were most suitable for training the ANN models. Consequently, the ANFIS and ANN models gave similar forecasts and performed better, for all prediction intervals, than the ARMA models developed for the same purpose.
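
    The lag-selection step with multiple linear regression can be sketched as follows; the synthetic tidal-like series and the lag range are assumptions for illustration, not the Darwin Harbor data or the authors' exact procedure.

```python
# Hedged sketch of choosing lagged sea-level inputs by multiple linear regression.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(3000)
# Synthetic tide-like hourly series (two harmonics plus noise), a stand-in for observations.
level = np.sin(2*np.pi*t/12.42) + 0.3*np.sin(2*np.pi*t/24.0) + 0.05*rng.normal(size=t.size)

def lag_matrix(series, n_lags):
    """Rows: [x(t), x(t-1), ..., x(t-n_lags+1)]; target is x(t+1)."""
    X = np.column_stack([series[n_lags-1-k : len(series)-1-k] for k in range(n_lags)])
    y = series[n_lags:]
    return X, y

for n_lags in range(1, 9):
    X, y = lag_matrix(level, n_lags)
    X1 = np.column_stack([np.ones(len(X)), X])       # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - resid.var() / y.var()
    print(f"{n_lags} lag(s): R^2 = {r2:.5f}")
# The smallest lag count beyond which R^2 stops improving is taken as the input set.
```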

  14. Valuing hydrological alteration in Multi-Objective reservoir management

    NASA Astrophysics Data System (ADS)

    Bizzi, S.; Pianosi, F.; Soncini-Sessa, R.

    2012-04-01

    Water management through dams and reservoirs is necessary worldwide to support key human activities, ranging from hydropower production to water allocation for agriculture and flood risk mitigation. Advances in multi-objective (MO) optimization techniques and ever-growing computing power make it possible to design reservoir operating policies that represent Pareto-optimal tradeoffs between the multiple interests analysed. While these advances are likely to enhance the performance of commonly targeted objectives (such as hydropower production or water supply), they risk strongly penalizing all interests that are not directly (i.e. mathematically) optimized within the MO algorithm. Although alteration of the hydrological regime is a well-established cause of ecological degradation, and its evaluation and rehabilitation are commonly required by recent legislation (such as the Water Framework Directive in Europe), it is rarely embedded as an objective in MO planning of optimal reservoir releases. Moreover, even when it is explicitly considered, the criteria adopted for its evaluation are doubted and not commonly trusted, undermining the possibility of real implementation of environmentally friendly policies. The main challenges in defining and assessing hydrological alteration are: how to define a reference state (referencing); how to define criteria upon which to build mathematical indicators of alteration (measuring); and how to aggregate the indicators into a single evaluation index that can be embedded in an MO optimization problem (valuing). This paper addresses these issues by: i) discussing the benefits and constraints of different approaches to referencing, measuring and valuing hydrological alteration; ii) testing two alternative indices of hydrological alteration in the context of MO problems, one based on the established framework of Indicators of Hydrologic Alteration (IHA, Richter et al., 1996) and a novel one satisfying the mathematical properties required by widely used optimization methods based on dynamic programming; iii) discussing the ranking provided by the proposed indices for a case study in Italy, where different operating policies were designed using an MO algorithm taking into account hydropower production, irrigation supply and flood mitigation and imposing different types of minimum environmental flow; and iv) providing a framework to effectively include hydrological alteration within MO reservoir management problems. Richter, B.D., Baumgartner, J.V., Powell, J., Braun, D.P., 1996, A Method for Assessing Hydrologic Alteration within Ecosystems, Conservation Biology, 10(4), 1163-1174.

  15. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.

  16. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  17. Optimization Research of Generation Investment Based on Linear Programming Model

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines a large number of complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, the optimized investment decision-making for generation is simulated and analyzed based on a linear programming model. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
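
    A minimal sketch of casting such an investment decision as a linear program is given below using scipy rather than GAMS; the technologies, costs and constraints are illustrative placeholders, not the paper's model.

```python
# Toy generation-investment LP (illustrative numbers only), solved with scipy instead of GAMS.
from scipy.optimize import linprog

# Decision variables: installed capacity (MW) of coal, gas, wind plants.
cost = [1.2, 0.9, 1.5]            # annualized cost per MW (arbitrary units)

# Constraints in A_ub @ x <= b_ub form:
#   - firm capacity must cover peak demand: coal + gas + 0.3*wind >= 1000 MW
#   - site limit on wind:                    wind <= 400 MW
A_ub = [[-1.0, -1.0, -0.3],
        [ 0.0,  0.0,  1.0]]
b_ub = [-1000.0, 400.0]

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print("optimal capacities (MW):", res.x, " total cost:", res.fun)
```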

  18. Hardware-in-the-Loop Modeling and Simulation Methods for Daylight Systems in Buildings

    NASA Astrophysics Data System (ADS)

    Mead, Alex Robert

    This dissertation introduces hardware-in-the-loop modeling and simulation techniques to the daylighting community, with specific application to complex fenestration systems. No such application of this class of techniques, which optimally combines mathematical-modeling and physical-modeling experimentation, is previously known to the author in the literature. Daylighting systems in buildings have a large impact on both the energy usage of a building and the occupant experience within a space. As such, a renewed interest has been placed on designing and constructing buildings with an emphasis on daylighting in recent times as part of the "green movement." Within daylighting systems, a specific subclass of building envelope is receiving much attention: complex fenestration systems (CFSs). CFSs are unique compared with regular fenestration systems (e.g. glazing) in that they allow for non-specular transmission of daylight into a space. This non-specular nature can be leveraged by designers to optimize the times of day and the days of the year that daylight enters a space. Examples of CFSs include Venetian blinds, woven fabric shades, and prismatic window coatings. In order to leverage the non-specular transmission properties of CFSs, however, engineering analysis techniques capable of faithfully representing the physics of these systems are needed. Traditionally, the analysis techniques available to the daylighting community fall broadly into three classes: simplified techniques, mathematical-modeling and simulation, and physical-modeling and experimentation. Simplified techniques use rule-of-thumb heuristics to provide insights for simple daylighting systems. Mathematical-modeling and simulation use complex numerical models to provide more detailed insights into system performance. Finally, physical models can be instrumented and excited using artificial and natural light sources to provide performance insight into a daylighting system. Broadly speaking, however, each class of techniques has advantages and disadvantages with respect to the cost of execution (e.g. money, time, expertise) and the fidelity of the insight it provides into the performance of the daylighting system. This varying tradeoff of cost and insight between the techniques determines which techniques are employed for which projects. Daylighting systems with CFS components, however, defy high-fidelity analysis with these traditional technique classes. Simplified techniques are clearly not applicable. Mathematical models must have great complexity in order to capture the non-specular transmission accurately, which greatly limits their applicability. This leaves physical modeling, the most costly approach, as the preferred method for CFSs. While mathematical-modeling and simulation methods do exist, they are in general costly and still only approximations of the underlying CFS behavior; in practice, measurements are currently the only practical method to capture the behavior of CFSs. Traditional measurements of CFS transmission and reflection properties are conducted using an instrument called a goniophotometer and produce a measurement in the form of a Bidirectional Scatter Distribution Function (BSDF) on the Klems basis. This measurement must be executed for each possible state of the CFS, hence only a subset of the possible behaviors can be captured for CFSs with continuously varying configurations. In the current era of rapid prototyping (e.g. 3D printing) and automated control of buildings including daylighting systems, a new analysis technique is needed which can faithfully represent the CFSs that are being designed and constructed at an increasing rate. Hardware-in-the-loop modeling and simulation is a perfect fit for the current need of analyzing daylighting systems with CFSs. In the hardware-in-the-loop modeling and simulation approach proposed in this dissertation, physical models of real CFSs are excited using either natural or artificial light. The exiting luminance distribution from these CFSs is measured and used as input to a Radiance mathematical model of the interior of the space that is proposed to be lit by the CFS-containing daylighting system. Hence, the components of the total daylighting and building system which are not mathematically modeled well, the CFS, are physically excited and measured, while the components which are modeled properly, namely the interior building space, are mathematically modeled. In order to excite and measure CFS behavior, a novel parallel goniophotometer, referred to as the CUBE 2.0, is developed in this dissertation. The CUBE 2.0 measures the input illuminance distribution and the output luminance distribution with respect to a CFS under test. Further, the process is fully automated, allowing for deployable experiments on proposed building sites as well as laboratory-based experiments. In this dissertation, three CFSs, two commercially available and one novel--Twitchell's Textilene 80 Black, Twitchell's Shade View Ebony, and Translucent Concrete Panels (TCP)--are simulated on the CUBE 2.0 system for daylong deployments at one-minute time steps. These CFSs are assumed to be placed in the glazing space within the Reference Office Radiance model, for which horizontal illuminance on a work plane of 0.8 m height is calculated for each time step. While Shade View Ebony and TCPs are unmeasured CFSs with respect to BSDF, Textilene 80 Black has been previously measured. As such, a validation of the CUBE 2.0 using the goniophotometer-measured BSDF is presented, with measurement errors of the horizontal illuminance between +3% and -10%. These error levels are considered valid within experimental daylighting investigations. Non-validated results are also presented in full for both Shade View Ebony and TCP. Concluding remarks and future directions for HWiL simulation close the dissertation.

  19. Math in Motion: Origami in the Classroom. A Hands-On Creative Approach to Teaching Mathematics. K-8.

    ERIC Educational Resources Information Center

    Pearl, Barbara

    This perfect bound teacher's guide presents techniques and activities to teach mathematics using origami paper folding. Part 1 includes a history of origami, mathematics and origami, and careers using mathematics. Parts 2 and 3 introduce paper-folding concepts and teaching techniques and include suggestions for low-budget paper resources. Part 4…

  20. CometBoards Users Manual Release 1.0

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Coroneos, Rula M.; Patnaik, Surya N.; Hopkins, Dale A.; Berke, Lazlo

    1996-01-01

    Several nonlinear mathematical programming algorithms for structural design applications are available at present. These include the sequence of unconstrained minimizations technique, the method of feasible directions, and the sequential quadratic programming technique. The optimality criteria technique and the fully utilized design concept are two other structural design methods. A project was undertaken to bring all these design methods under a common computer environment so that a designer can select any one of these tools that may be suitable for his/her application. To facilitate selection of a design algorithm, to validate and check out the computer code, and to ascertain the relative merits of the design tools, modest finite element structural analysis programs based on the concept of stiffness and integrated force methods have been coupled to each design method. The code that contains both these design and analysis tools, by reading input information from analysis and design data files, can cast the design of a structure as a minimum-weight optimization problem. The code can then solve it with a user-specified optimization technique and a user-specified analysis method. This design code is called CometBoards, which is an acronym for Comparative Evaluation Test Bed of Optimization and Analysis Routines for the Design of Structures. This manual describes for the user a step-by-step procedure for setting up the input data files and executing CometBoards to solve a structural design problem. The manual includes the organization of CometBoards; instructions for preparing input data files; the procedure for submitting a problem; illustrative examples; and several demonstration problems. A set of 29 structural design problems have been solved by using all the optimization methods available in CometBoards. A summary of the optimum results obtained for these problems is appended to this users manual. CometBoards, at present, is available for Posix-based Cray and Convex computers, Iris and Sun workstations, and the VM/CMS system.

  1. Engineering tradeoff problems viewed as multiple objective optimizations and the VODCA methodology

    NASA Astrophysics Data System (ADS)

    Morgan, T. W.; Thurgood, R. L.

    1984-05-01

    This paper summarizes a rational model for making engineering tradeoff decisions. The model is a hybrid from the fields of social welfare economics, communications, and operations research. A solution methodology (Vector Optimization Decision Convergence Algorithm - VODCA) firmly grounded in the economic model is developed both conceptually and mathematically. The primary objective for developing the VODCA methodology was to improve the process for extracting relative value information about the objectives from the appropriate decision makers. This objective was accomplished by employing data filtering techniques to increase the consistency of the relative value information and decrease the amount of information required. VODCA is applied to a simplified hypothetical tradeoff decision problem. Possible use of multiple objective analysis concepts and the VODCA methodology in product-line development and market research are discussed.

  2. Repeated applications of a transdermal patch: analytical solution and optimal control of the delivery rate.

    PubMed

    Simon, L

    2007-10-01

    The integral transform technique was implemented to solve a mathematical model developed for percutaneous drug absorption. The model included repeated application and removal of a patch from the skin. Fick's second law of diffusion was used to study the transport of a medicinal agent through the vehicle and subsequent penetration into the stratum corneum. Eigenmodes and eigenvalues were computed and introduced into an inversion formula to estimate the delivery rate and the amount of drug in the vehicle and the skin. A dynamic programming algorithm calculated the optimal doses necessary to achieve a desired transdermal flux. The analytical method predicted profiles that were in close agreement with published numerical solutions and provided an automated strategy to perform therapeutic drug monitoring and control.
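
    For orientation, a crude numerical alternative to the analytical eigenmode solution is sketched below: Fick's second law in the membrane is stepped forward with explicit finite differences and the delivery rate (flux into the receiver) is tracked. All parameter values are assumed for illustration only.

```python
# Crude numerical sketch of Fick's second law in a membrane (explicit finite differences);
# the paper itself uses an analytical eigenmode (integral-transform) solution instead.
import numpy as np

D, L = 1e-9, 1e-4              # diffusivity (m^2/s) and membrane thickness (m) -- assumed values
n, dt = 101, 2e-4              # grid points and time step (s)
dx = L / (n - 1)
assert D * dt / dx**2 <= 0.5   # stability condition for the explicit scheme

c = np.zeros(n)
c[0] = 1.0                     # patch side held at unit concentration; perfect sink at x = L

steps = int(40.0 / dt)         # simulate 40 s, several diffusion time constants L^2/D
for _ in range(steps):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2*c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0     # re-impose boundary conditions

flux = D * (c[-2] - c[-1]) / dx          # delivery rate per unit area into the receiver
print("late-time flux:", flux, " steady-state value D*C0/L:", D * 1.0 / L)
```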

  3. Computer Aided Learning of Mathematics: Software Evaluation

    ERIC Educational Resources Information Center

    Yushau, B.; Bokhari, M. A.; Wessels, D. C. J.

    2004-01-01

    Computer Aided Learning of Mathematics (CALM) has been in use for some time in the Prep-Year Mathematics Program at King Fahd University of Petroleum & Minerals. Different kinds of software (both locally designed and imported) have been used in the quest of optimizing the recitation/problem session hour of the mathematics classes. This paper…

  4. From analytic inversion to contemporary IMRT optimization: radiation therapy planning revisited from a mathematical perspective.

    PubMed

    Censor, Yair; Unkelbach, Jan

    2012-04-01

    In this paper we look at the development of radiation therapy treatment planning from a mathematical point of view. Historically, planning for Intensity-Modulated Radiation Therapy (IMRT) has been considered as an inverse problem. We discuss first the two fundamental approaches that have been investigated to solve this inverse problem: Continuous analytic inversion techniques on one hand, and fully-discretized algebraic methods on the other hand. In the second part of the paper, we review another fundamental question which has been subject to debate from the beginning of IMRT until the present day: The rotation therapy approach versus fixed angle IMRT. This builds a bridge from historic work on IMRT planning to contemporary research in the context of Intensity-Modulated Arc Therapy (IMAT). Copyright © 2011 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
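
    The fully-discretized algebraic view mentioned above can be illustrated with a toy nonnegative least-squares fluence problem; the dose-influence matrix and prescription below are random placeholders, not clinical data or the specific methods reviewed in the paper.

```python
# Minimal sketch of the fully-discretized algebraic IMRT formulation: solve for
# nonnegative beamlet weights so that the dose-influence matrix reproduces a prescription.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_voxels, n_beamlets = 200, 40
A = rng.random((n_voxels, n_beamlets))        # dose in voxel i per unit weight of beamlet j (toy)
d_prescribed = np.full(n_voxels, 60.0)        # uniform 60 Gy target (illustrative)

w, residual = nnls(A, d_prescribed)           # nonnegative least-squares beamlet weights
print("active beamlets:", int((w > 1e-8).sum()), " residual norm:", residual)
print("achieved dose range:", (A @ w).min(), (A @ w).max())
```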

  5. A design procedure for the handling qualities optimization of the X-29A aircraft

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Cox, Timothy H.

    1989-01-01

    A design technique for handling qualities improvement was developed for the X-29A aircraft. As with any new aircraft, the X-29A control law designers were presented with a relatively high degree of uncertainty in their mathematical models. The presence of uncertainties, and the high level of static instability of the X-29A caused the control law designers to stress stability and robustness over handling qualities. During flight test, the mathematical models of the vehicle were validated or corrected to match the vehicle dynamic behavior. The updated models were then used to fine tune the control system to provide fighter-like handling characteristics. A design methodology was developed which works within the existing control system architecture to provide improved handling qualities and acceptable stability with a minimum of cost in both implementation as well as software verification and validation.

  6. Sparse QSAR modelling methods for therapeutic and regenerative medicine

    NASA Astrophysics Data System (ADS)

    Winkler, David A.

    2018-02-01

    The quantitative structure-activity relationships method was popularized by Hansch and Fujita over 50 years ago. The usefulness of the method for drug design and development has been shown in the intervening years. As it was developed initially to elucidate which molecular properties modulated the relative potency of putative agrochemicals, and at a time when computing resources were scarce, there is much scope for applying modern mathematical methods to improve the QSAR method and to extending the general concept to the discovery and optimization of bioactive molecules and materials more broadly. I describe research over the past two decades where we have rebuilt the unit operations of the QSAR method using improved mathematical techniques, and have applied this valuable platform technology to new important areas of research and industry such as nanoscience, omics technologies, advanced materials, and regenerative medicine. This paper was presented as the 2017 ACS Herman Skolnik lecture.

  7. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    PubMed

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique for understanding the dynamics of complex biochemical systems. To promote such modeling, we had developed the CADLIVE dynamic simulator, which automatically converts a biochemical map into its associated mathematical model, simulates its dynamic behaviors and analyzes its robustness. To enhance the feasibility of CADLIVE and extend its functions, we propose the CADLIVE toolbox for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator but also the latest tools, including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to research in systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with an instruction manual.

  8. From immunology to MRI data analysis: Problems in mathematical biology

    NASA Astrophysics Data System (ADS)

    Waters, Ryan Samuel

    This thesis represents a collection of four distinct biological projects, arising from immunology and metabolomics, that required unique and creative mathematical approaches. One project focuses on understanding the role IL-2 plays in immune response regulation and exploring how these effects can be altered. We developed several dynamic models of the receptor signaling network, which we analyze analytically and numerically. In a second project, also related to multiple sclerosis (MS), we sought to create a system for grading magnetic resonance images (MRI) with good correlation with disability. The goal is for these MRI scores to provide a better standard for large-scale clinical drug trials, limiting the bias associated with differences in available MRI technology and general grader/participant variability. The third project involves the study of the CRISPR adaptive immune system in bacteria. Bacterial cells recognize and acquire snippets of exogenous genetic material, which they incorporate into their DNA. In this project we explore the optimal design of the CRISPR system, given a viral distribution, to maximize the probability of survival. The final project involves the study of the benefits of colocalization of coupled enzymes in metabolic pathways. The hypothesized kinetic advantage, known as "channeling", of putting coupled enzymes closer together has been used as justification for the colocalization of coupled enzymes in biological systems. We developed and analyzed a simple partial differential equation model of the diffusion of the intermediate substrate between coupled enzymes to explore the phenomenon of channeling. The four projects of my thesis represent very distinct biological problems that required a variety of techniques from diverse areas of mathematics, ranging from dynamical modeling to statistics, Fourier series and the calculus of variations. In each case, quantitative techniques were used to address biological questions from a mathematical perspective, ultimately providing insight back into the biological problems which motivated them.

  9. A Multifaceted Mathematical Approach for Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, F.; Anitescu, M.; Bell, J.

    2012-03-07

    Applied mathematics has an important role to play in developing the tools needed for the analysis, simulation, and optimization of complex problems. These efforts require the development of the mathematical foundations for scientific discovery, engineering design, and risk analysis based on a sound integrated approach for the understanding of complex systems. However, maximizing the impact of applied mathematics on these challenges requires a novel perspective on approaching the mathematical enterprise. Previous reports that have surveyed the DOE's research needs in applied mathematics have played a key role in defining research directions with the community. Although these reports have had significant impact, accurately assessing current research needs requires an evaluation of today's challenges against the backdrop of recent advances in applied mathematics and computing. To address these needs, the DOE Applied Mathematics Program sponsored a Workshop for Mathematics for the Analysis, Simulation and Optimization of Complex Systems on September 13-14, 2011. The workshop had approximately 50 participants from both the national labs and academia. The goal of the workshop was to identify new research areas in applied mathematics that will complement and enhance the existing DOE ASCR Applied Mathematics Program efforts that are needed to address problems associated with complex systems. This report describes recommendations from the workshop and subsequent analysis of the workshop findings by the organizing committee.

  10. Mathematical models for optimization of the centrifugal stage of a refrigerating compressor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuzhdin, A.S.

    1987-09-01

    The authors describe a general approach to the creation of mathematical models of energy and head losses in the flow part of a centrifugal compressor. The mathematical model of the pressure head and efficiency of a two-section stage proposed in this paper is meant for determining its characteristics for the assigned geometric dimensions and for optimization by variance calculations. Characteristic points on the plot of velocity distribution over the margin of the vanes of the impeller and the diffuser of the centrifugal stage with a combined diffuser are presented. To assess the reliability of the mathematical model, the authors compared some calculated data with experimental ones.

  11. Process optimization for osmo-dehydrated carambola (Averrhoa carambola L) slices and its storage studies.

    PubMed

    Roopa, N; Chauhan, O P; Raju, P S; Das Gupta, D K; Singh, R K R; Bawa, A S

    2014-10-01

    An osmotic-dehydration process protocol for carambola (Averrhoa carambola L.), an exotic star-shaped tropical fruit, was developed. The process was optimized using Response Surface Methodology (RSM) following a Central Composite Rotatable Design (CCRD). The experimental variables selected for the optimization were soak solution concentration (°Brix), soaking temperature (°C) and soaking time (min), with six experiments at the central point. The effect of the process variables on solid gain and water loss during the osmotic dehydration process was studied. The data obtained were analyzed employing a multiple regression technique to generate suitable mathematical models. Quadratic models were found to fit well (R(2), 95.58-98.64%) in describing the effect of the variables on the responses studied. The optimized levels of the process variables were 70 °Brix, 48 °C and 144 min for soak solution concentration, soaking temperature and soaking time, respectively. The predicted and experimental results at the optimized levels of the variables showed high correlation. The osmo-dehydrated product prepared at the optimized conditions showed a shelf-life of 10, 8 and 6 months at 5 °C, ambient (30 ± 2 °C) and 37 °C, respectively.

  12. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization

    PubMed Central

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient visit data are retrieved from January 2005 to December 2013 and first taken as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194
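
    A minimal particle swarm optimization loop, of the kind used here to tune network weights and thresholds, is sketched below; the objective function is a toy error surface standing in for an actual back-propagation network's training error.

```python
# Minimal PSO sketch (toy objective, not a real ANN training loop).
import numpy as np

rng = np.random.default_rng(4)

def error(w):                       # toy stand-in for a network's training error
    return np.sum((w - 0.7) ** 2) + 0.1 * np.sum(np.sin(5 * w) ** 2)

dim, n_particles, iters = 10, 30, 200
w_inertia, c1, c2 = 0.7, 1.5, 1.5   # standard PSO coefficients

pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([error(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([error(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best error:", pbest_val.min(), " best weights (first 3, ~0.7 expected):", gbest[:3])
```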

  13. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization.

    PubMed

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient visit data are retrieved from January 2005 to December 2013 and first taken as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods.

  14. Optimal Facility-Location

    PubMed Central

    Goldman, A. J.

    2006-01-01

    Dr. Christoph Witzgall, the honoree of this Symposium, can count among his many contributions to applied mathematics and mathematical operations research a body of widely-recognized work on the optimal location of facilities. The present paper offers to non-specialists a sketch of that field and its evolution, with emphasis on areas most closely related to Witzgall’s research at NBS/NIST. PMID:27274920

  15. Can modeling of HIV treatment processes improve outcomes? Capitalizing on an operations research approach to the global pandemic

    PubMed Central

    Xiong, Wei; Hupert, Nathaniel; Hollingsworth, Eric B; O'Brien, Megan E; Fast, Jessica; Rodriguez, William R

    2008-01-01

    Background: Mathematical modeling has been applied to a range of policy-level decisions on resource allocation for HIV care and treatment. We describe the application of classic operations research (OR) techniques to address logistical and resource management challenges in HIV treatment scale-up activities in resource-limited countries. Methods: We review and categorize several of the major logistical and operational problems encountered over the last decade in the global scale-up of HIV care and antiretroviral treatment for people with AIDS. While there are unique features of HIV care and treatment that pose significant challenges to effective modeling and service improvement, we identify several analogous OR-based solutions that have been developed in the service, industrial, and health sectors. Results: HIV treatment scale-up includes many processes that are amenable to mathematical and simulation modeling, including forecasting future demand for services; locating and sizing facilities for maximal efficiency; and determining optimal staffing levels at clinical centers. Optimization of clinical and logistical processes through modeling may improve outcomes, but successful OR-based interventions will require contextualization of response strategies, including appreciation of both existing health care systems and limitations in local health workforces. Conclusion: The modeling techniques developed in the engineering field of operations research have wide potential application to the variety of logistical problems encountered in HIV treatment scale-up in resource-limited settings. Increasing the number of cross-disciplinary collaborations between engineering and public health will help speed the appropriate development and application of these tools. PMID:18680594

  16. Mathematical Modelling of Optimization of Structures of Monolithic Coverings Based on Liquid Rubbers

    NASA Astrophysics Data System (ADS)

    Turgumbayeva, R. Kh; Abdikarimov, M. N.; Mussabekov, R.; Sartayev, D. T.

    2018-05-01

    The paper considers the optimization of monolithic coating compositions using a computer and MPE methods. The goal of the paper was to construct a mathematical model of the complete factorial experiment, taking into account its plan and conditions. Several regression equations were obtained. The dependence between the content of the components and the parameters of the rubber, as well as the quantity of rubber crumb, was considered. An optimal composition for manufacturing the material of monolithic coatings was recommended based on the experimental data.

  17. ALARA: The next link in a chain of activation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, P.P.H.; Henderson, D.L.

    1996-12-31

    The Adaptive Laplace and Analytic Radioactivity Analysis [ALARA] code has been developed as the next link in the chain of DKR radioactivity codes. Its methods address the criticisms of DKR while retaining its best features. While DKR ignored loops in the transmutation/decay scheme to preserve the exactness of the mathematical solution, ALARA incorporates new computational approaches without jeopardizing the most important features of DKR's physical modelling and mathematical methods. The physical model uses "straightened-loop, linear chains" to achieve the same accuracy in the loop solutions as is demanded in the rest of the scheme. In cases where a chain has no loops, the exact DKR solution is used. Otherwise, ALARA adaptively chooses between a direct Laplace inversion technique and a Laplace expansion inversion technique to optimize the accuracy and speed of the solution. All of these methods result in matrix solutions which allow the fastest and most accurate solution of exact pulsing histories. Since the entire history is solved for each chain as it is created, ALARA achieves the optimum combination of high accuracy, high speed and low memory usage. 8 refs., 2 figs.

  18. On designing for quality

    NASA Technical Reports Server (NTRS)

    Vajingortin, L. D.; Roisman, W. P.

    1991-01-01

    The problem of ensuring the required quality of products and/or technological processes often becomes more difficult because there is no general theory for determining the optimal sets of values of the primary factors, i.e., of the output parameters of the parts and units comprising an object, that ensure the correspondence of the object's parameters to the quality requirements. This is the main reason for the amount of time taken to finish a complex, vital article. To create this theory, one has to overcome a number of difficulties and solve the following tasks: the creation of reliable and stable mathematical models showing the influence of the primary factors on the output parameters; finding a new technique for assigning tolerances for the primary factors with regard to economic, technological, and other criteria, the technique being based on the solution of the main problem; and well-reasoned assignment of nominal values for the primary factors, which serve as the basis for creating tolerances. Each of the above tasks is of independent importance. An attempt is made to give solutions for this problem. The above problem, dealing with quality assurance in a mathematically formalized aspect, is called the multiple inverse problem.

  19. Mathematical Inversion of Lightning Data: Techniques and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William

    2003-01-01

    A survey of some interesting mathematical inversion studies dealing with radio, optical, and electrostatic measurements of lightning is presented. Why NASA is interested in lightning, what specific physical properties of lightning are retrieved, and what mathematical techniques are used to perform the retrievals are discussed. In particular, a relatively new multi-station VHF time-of-arrival (TOA) antenna network is now on-line in Northern Alabama and will be discussed. The network, called the Lightning Mapping Array (LMA), employs GPS timing and detects VHF radiation from discrete segments (effectively point emitters) that comprise the channel of lightning strokes within cloud and ground flashes. The LMA supports on-going ground-validation activities of the low Earth orbiting Lightning Imaging Sensor (LIS) satellite developed at NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama. The LMA also provides detailed studies of the distribution and evolution of thunderstorms and lightning in the Tennessee Valley, and offers interesting comparisons with other meteorological/geophysical datasets. In order to take full advantage of these benefits, it is essential that the LMA channel mapping accuracy (in both space and time) be fully characterized and optimized. A new channel mapping retrieval algorithm is introduced for this purpose. To characterize the spatial distribution of retrieval errors, the algorithm has been applied to analyze literally tens of millions of computer-simulated lightning VHF point sources that have been placed at various ranges, azimuths, and altitudes relative to the LMA network. Statistical results are conveniently summarized in high-resolution, color-coded error maps.
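
    A simplified sketch of a TOA source retrieval of the kind the LMA performs is given below: synthetic arrival times at a handful of stations are inverted for the emitter position and emission time by nonlinear least squares. The station geometry, noise level and starting guess are assumptions for illustration only.

```python
# Simplified time-of-arrival (TOA) source retrieval with synthetic data (no real LMA data).
import numpy as np
from scipy.optimize import least_squares

c = 2.998e8                                   # speed of light (m/s)
rng = np.random.default_rng(5)
stations = rng.uniform(-20e3, 20e3, (7, 3))   # 7 station positions (m)
stations[:, 2] = np.abs(stations[:, 2]) / 40  # keep antennas near the ground

true_src = np.array([5e3, -3e3, 8e3])         # true emitter location (m)
true_t0 = 1.0e-3                              # true emission time (s)
t_arr = true_t0 + np.linalg.norm(stations - true_src, axis=1) / c
t_arr += rng.normal(0, 5e-9, t_arr.size)      # ~5 ns timing noise

def residuals(p):
    # Parameters: source position (m) and c*t0 (m), so all entries share one scale.
    xyz, ct0 = p[:3], p[3]
    return c * t_arr - (ct0 + np.linalg.norm(stations - xyz, axis=1))

sol = least_squares(residuals, x0=[0.0, 0.0, 5e3, 0.0])
print("retrieved source (m):", sol.x[:3], " emission time (s):", sol.x[3] / c)
```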

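    The kind of TOA source retrieval referred to above can be sketched as a nonlinear least-squares problem: given arrival times at several stations, find the emitter position and emission time that minimize the timing residuals. The station layout, noise level, and the use of scipy.optimize.least_squares below are illustrative assumptions, not the operational LMA algorithm.

      # Illustrative TOA source retrieval (not the operational LMA algorithm):
      # solve for (x, y, z, t0) of a VHF point emitter from arrival times at N stations.
      import numpy as np
      from scipy.optimize import least_squares

      C = 2.998e8                                    # speed of light [m/s]
      stations = np.array([[0, 0, 0.2], [20, 0, 0.3],
                           [0, 20, 0.1], [20, 20, 0.4]]) * 1e3   # station x, y, z [m] (assumed)

      def arrival_times(src, t0):
          d = np.linalg.norm(stations - src, axis=1)
          return t0 + d / C

      # Simulated truth and noisy measurements (50 ns timing noise, assumed).
      true_src, true_t0 = np.array([8e3, 12e3, 6e3]), 1.0e-3
      t_meas = arrival_times(true_src, true_t0) + np.random.normal(0, 50e-9, len(stations))

      def residuals(p):
          return arrival_times(p[:3], p[3]) - t_meas

      fit = least_squares(residuals, x0=[10e3, 10e3, 5e3, 1.0e-3])
      print("retrieved source [m]:", fit.x[:3], " emission time [s]:", fit.x[3])
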
  20. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, M; Li, R; Xing, L

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the quality of the resultant treatment plans as compared with conventional VMAT or IMRT treatments.

  1. Locating helicopter emergency medical service bases to optimise population coverage versus average response time.

    PubMed

    Garner, Alan A; van den Berg, Pieter L

    2017-10-16

    New South Wales (NSW), Australia has a network of multirole retrieval physician staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction with population concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high resolution census population data for NSW from 2011 which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45 min response time threshold or minimizing the overall average response time to all persons, both in green field scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model where average response time was optimised based on minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min when optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average state wide response time by 4 min. The optimum seven-base hybrid model, which was able to cover 97.75% of the population within 45 min and reach all of the population within an average response time of 18 min, included the rapid response HEMS. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimizes response time for a given number of bases and a minimum defined threshold of population coverage. Addition of specialized rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.

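    The maximal covering location problem used in this study can be illustrated with a toy instance solved by the classic greedy heuristic (repeatedly add the base that covers the most uncovered population). This is only a stand-in for the exact model used by the authors; the demand points, candidate sites, speed, and coverage radius below are assumptions.

      # Greedy heuristic for the maximal covering location problem (illustrative only;
      # the study solves the exact MCLP, this is a toy stand-in with assumed data).
      import numpy as np

      rng = np.random.default_rng(0)
      demand_xy = rng.uniform(0, 500, size=(200, 2))    # population centre coordinates [km]
      population = rng.integers(200, 800, size=200)     # people per centre
      candidates = rng.uniform(0, 500, size=(30, 2))    # candidate base locations [km]
      radius_km = 45 / 60 * 250                         # distance reachable in 45 min at 250 km/h (assumed)

      # covers[j, i] is True if candidate base j covers demand point i.
      dist = np.linalg.norm(demand_xy[None, :, :] - candidates[:, None, :], axis=2)
      covers = dist <= radius_km

      def greedy_mclp(n_bases):
          chosen, covered = [], np.zeros(len(population), dtype=bool)
          for _ in range(n_bases):
              gains = [population[~covered & covers[j]].sum() for j in range(len(candidates))]
              best = int(np.argmax(gains))
              chosen.append(best)
              covered |= covers[best]
          return chosen, population[covered].sum() / population.sum()

      bases, frac = greedy_mclp(7)
      print("chosen bases:", bases, " population covered: %.1f%%" % (100 * frac))
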
  2. Secondary Math Instructional Practices, Academic Optimism, Instructional Leadership, and Receptivity to Curricular Change in Schools with High and Low Mathematics Mastery and Poverty

    ERIC Educational Resources Information Center

    Kennedy, Scott J.

    2014-01-01

    The purpose of this study was to investigate how high school mathematics teachers' descriptions of academic optimism, responsive teaching, technological pedagogical content knowledge, formative assessment, reflective practice, supervisor instructional leadership, and receptivity to change are related to student mastery on a NYS Regents exam in…

  3. Design Features of Pedagogically-Sound Software in Mathematics.

    ERIC Educational Resources Information Center

    Haase, Howard; And Others

    Weaknesses in educational software currently available in the domain of mathematics are discussed. A technique that was used for the design and production of mathematics software aimed at improving problem-solving skills which combines sound pedagogy and innovative programming is presented. To illustrate the design portion of this technique, a…

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    The Second SIAM Conference on Computational Science and Engineering was held in San Diego from February 10-12, 2003. Total conference attendance was 553. This is a 23% increase in attendance over the first conference. The focus of this conference was to draw attention to the tremendous range of major computational efforts on large problems in science and engineering, to promote the interdisciplinary culture required to meet these large-scale challenges, and to encourage the training of the next generation of computational scientists. Computational Science & Engineering (CS&E) is now widely accepted, along with theory and experiment, as a crucial third mode of scientific investigation and engineering design. Aerospace, automotive, biological, chemical, semiconductor, and other industrial sectors now rely on simulation for technical decision support. For federal agencies also, CS&E has become an essential support for decisions on resources, transportation, and defense. CS&E is, by nature, interdisciplinary. It grows out of physical applications and it depends on computer architecture, but at its heart are powerful numerical algorithms and sophisticated computer science techniques. From an applied mathematics perspective, much of CS&E has involved analysis, but the future surely includes optimization and design, especially in the presence of uncertainty. Another mathematical frontier is the assimilation of very large data sets through such techniques as adaptive multi-resolution, automated feature search, and low-dimensional parameterization. The themes of the 2003 conference included, but were not limited to: Advanced Discretization Methods; Computational Biology and Bioinformatics; Computational Chemistry and Chemical Engineering; Computational Earth and Atmospheric Sciences; Computational Electromagnetics; Computational Fluid Dynamics; Computational Medicine and Bioengineering; Computational Physics and Astrophysics; Computational Solid Mechanics and Materials; CS&E Education; Meshing and Adaptivity; Multiscale and Multiphysics Problems; Numerical Algorithms for CS&E; Discrete and Combinatorial Algorithms for CS&E; Inverse Problems; Optimal Design, Optimal Control, and Inverse Problems; Parallel and Distributed Computing; Problem-Solving Environments; Software and Middleware Systems; Uncertainty Estimation and Sensitivity Analysis; and Visualization and Computer Graphics.

  5. Mathematical Techniques for Nonlinear System Theory.

    DTIC Science & Technology

    1978-01-01

    [Only fragments of the report documentation page of this DTIC record are recoverable.] Interim report from the University of Florida, Center for Mathematical System Theory, Gainesville, FL, on mathematical techniques for nonlinear system theory. The visible reference fragments include E. D. Sontag, "Linear systems over commutative rings: a survey", Ricerche di Automatica, 7: 1-34, and a paper in Mathematical System Theory, 9: 327-344.

  6. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, 'GENOA', is dedicated to parallel and high speed analysis to perform probabilistic evaluation of high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives which were achieved in performing the development were: (1) utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material and processing of high temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, and increasing convergence rates through high- and low-level processor assignment; (4) creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid and distributed workstation types of computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.

  7. Three-dimensional electrical impedance tomography: a topology optimization approach.

    PubMed

    Mello, Luís Augusto Motta; de Lima, Cícero Ribeiro; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez; Silva, Emílio Carlos Nelli

    2008-02-01

    Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.

  8. An approach of the exact linearization techniques to analysis of population dynamics of the mosquito Aedes aegypti.

    PubMed

    Dos Reis, Célia A; Florentino, Helenice de O; Cólon, Diego; Rosa, Suélia R Fleury; Cantane, Daniela R

    2018-05-01

    Dengue fever, chikungunya and zika are caused by different viruses and mainly transmitted by Aedes aegypti mosquitoes. These diseases have received special attention of public health officials due to the large number of infected people in tropical and subtropical countries and the possible sequelae that those diseases can cause. In severe cases, the infection can have devastating effects, affecting the central nervous system, muscles, brain and respiratory system, often resulting in death. Vaccines against these diseases are still under development and, therefore, current studies are focused on the treatment of diseases and vector (mosquito) control. This work focuses on this last topic, and presents the analysis of a mathematical model describing the population dynamics of Aedes aegypti, as well as the design of a control law for the mosquito population (vector control) via exact linearization techniques and optimal control. This control strategy optimizes the use of resources for vector control, and focuses on the aquatic stage of the mosquito life. Theoretical and computational results are also presented. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Development of mathematical models and optimization of the process parameters of laser surface hardened EN25 steel using elitist non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.

    2018-02-01

    The ultimate goal of all production entities is to select the process parameters that yield maximum strength with minimum wear and friction. Friction and wear are serious problems in most industries; they are influenced by the working set of parameters, the oxidation characteristics and the mechanism involved in the formation of wear. The experimental input parameters such as sliding distance, applied load, and temperature are utilized in finding the optimized solution for achieving the desired output responses such as coefficient of friction, wear rate, and volume loss. The optimization is performed with the help of the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), an evolutionary algorithm. The regression equations obtained using Response Surface Methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through the desirability approach in RSM are compared with the optimized solution obtained through NSGA-II. The results show that the proposed evolutionary technique is more effective and faster than the desirability approach.

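    At the core of NSGA-II is non-dominated sorting of candidate parameter sets by their objective vectors (here, for example, coefficient of friction, wear rate and volume loss, all minimized). The sketch below shows dominance checking and first-front extraction only; it is a reduced illustration with made-up data, not the full NSGA-II used in the paper.

      # Pareto dominance and first-front extraction, the building block of NSGA-II.
      # Objective vectors are assumed to be minimized; the sample data are made up.
      import numpy as np

      def dominates(a, b):
          """True if a is no worse than b in every objective and better in at least one."""
          return np.all(a <= b) and np.any(a < b)

      def first_pareto_front(objectives):
          front = []
          for i, fi in enumerate(objectives):
              if not any(dominates(fj, fi) for j, fj in enumerate(objectives) if j != i):
                  front.append(i)
          return front

      # Each row: (coefficient of friction, wear rate, volume loss) for one parameter set.
      obj = np.array([[0.42, 1.8, 0.12],
                      [0.38, 2.1, 0.10],
                      [0.45, 1.7, 0.15],
                      [0.40, 1.9, 0.11]])
      print("indices on the first non-dominated front:", first_pareto_front(obj))
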
  10. The aggregated unfitted finite element method for elliptic problems

    NASA Astrophysics Data System (ADS)

    Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.

    2018-07-01

    Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called the aggregated unfitted finite element method, is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.

  11. Problem based learning with scaffolding technique on geometry

    NASA Astrophysics Data System (ADS)

    Bayuningsih, A. S.; Usodo, B.; Subanti, S.

    2018-05-01

    Geometry, as one of the branches of mathematics, has an important role in the study of mathematics. This research aims to explore the effectiveness of Problem Based Learning (PBL) with a scaffolding technique, viewed from self-regulated learning, on students' achievement in mathematics. The research data were obtained through a mathematics learning achievement test and a self-regulated learning (SRL) questionnaire. This research employed a quasi-experimental design. The subjects of this research are students of a junior high school in Banyumas, Central Java. The result of the research showed that the problem-based learning model with the scaffolding technique is more effective in generating students' mathematics learning achievement than direct learning (DL). This is because in the PBL model students are more able to think actively and creatively. Students in the high SRL category have better mathematics learning achievement than those in the middle and low SRL categories, and the middle SRL category performs better than the low SRL category. Thus, there is an interaction between the learning model and self-regulated learning in increasing mathematics learning achievement.

  12. Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model

    NASA Astrophysics Data System (ADS)

    Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.

    2007-05-01

    Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem and predict the strip crown, a new customized semi-analytical modeling technique that couples the Finite Element Method (FEM) with classical solid mechanics was developed to model the deflection of the rolls and strip while under load. The technique employed offers several important advantages over traditional methods to calculate strip crown, including continuity of elastic foundations, non-iterative solution when using predetermined foundation moduli, continuous third-order displacement fields, simple stress-field determination, and a comparatively faster solution time.

  13. Modeling Photo-Bleaching Kinetics to Create High Resolution Maps of Rod Rhodopsin in the Human Retina

    PubMed Central

    Ehler, Martin; Dobrosotskaya, Julia; Cunningham, Denise; Wong, Wai T.; Chew, Emily Y.; Czaja, Wojtek; Bonner, Robert F.

    2015-01-01

    We introduce and describe a novel non-invasive in-vivo method for mapping local rod rhodopsin distribution in the human retina over a 30-degree field. Our approach is based on analyzing the brightening of detected lipofuscin autofluorescence within small pixel clusters in registered imaging sequences taken with a commercial 488nm confocal scanning laser ophthalmoscope (cSLO) over a 1 minute period. We modeled the kinetics of rhodopsin bleaching by applying variational optimization techniques from applied mathematics. The physical model and the numerical analysis with its implementation are outlined in detail. This new technique enables the creation of spatial maps of the retinal rhodopsin and retinal pigment epithelium (RPE) bisretinoid distribution with an ≈ 50μm resolution. PMID:26196397

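    The bleaching kinetics can be illustrated with a minimal per-cluster fit of an exponential brightening model to the autofluorescence time series, F(t) = F_inf - (F_inf - F_0) exp(-k t), where the rate k reflects the local rhodopsin that was absorbing the excitation light. The model form, sample data, and SciPy call below are illustrative assumptions, not the authors' variational optimization scheme.

      # Illustrative exponential-brightening fit for one pixel cluster (assumed model form;
      # the paper uses a variational optimization framework rather than this simple fit).
      import numpy as np
      from scipy.optimize import curve_fit

      def brightening(t, F0, Finf, k):
          """Autofluorescence vs. time as rhodopsin bleaches with rate k [1/s]."""
          return Finf - (Finf - F0) * np.exp(-k * t)

      t = np.linspace(0, 60, 61)                       # 1 min of cSLO frames at 1 Hz (assumed)
      noisy = brightening(t, 40.0, 100.0, 0.08) + np.random.normal(0, 2.0, t.size)

      params, _ = curve_fit(brightening, t, noisy, p0=[30.0, 90.0, 0.05])
      F0, Finf, k = params
      print("fitted bleaching rate k = %.3f 1/s" % k)
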
  14. A nonlinear model for gas chromatograph systems

    NASA Technical Reports Server (NTRS)

    Feinberg, M. P.

    1975-01-01

    Fundamental engineering design techniques and concepts were studied for the optimization of a gas chromatograph-mass spectrometer chemical analysis system suitable for use on an unmanned Martian roving vehicle. Previously developed mathematical models of the gas chromatograph are found to be inadequate for predicting peak heights and spreading for some experimental conditions and chemical systems. A modification to the existing equilibrium adsorption model is required; the Langmuir isotherm replaces the linear isotherm. The numerical technique of Crank-Nicolson was studied for use with the linear isotherm to determine the utility of the method. Modifications are made to the method to eliminate unnecessary calculations, which results in an overall reduction of the computation time of about 42 percent. The Langmuir isotherm is then considered, which takes into account the composition-dependent effects on the thermodynamic parameter, mRo.

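    The change from the linear to the Langmuir isotherm can be written explicitly; the notation below is chosen here for illustration and is not necessarily the report's own.

      % Linear vs. Langmuir adsorption isotherms (illustrative notation):
      % q = adsorbed amount, c = mobile-phase concentration,
      % K = adsorption equilibrium constant, q_max = monolayer capacity.
      \begin{aligned}
        \text{linear:}   \quad & q = K\,c, \\
        \text{Langmuir:} \quad & q = \frac{q_{\max} K c}{1 + K c}.
      \end{aligned}

    At low concentration the Langmuir form reduces to the linear one, which is consistent with the linear model being adequate for only some of the experimental conditions noted above.
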
  15. Predictive display design for the vehicles with time delay in dynamic response

    NASA Astrophysics Data System (ADS)

    Efremov, A. V.; Tiaglik, M. S.; Irgaleev, I. H.; Efremov, E. V.

    2018-02-01

    Two ways to improve flying qualities are considered: the predictive display (PD) and the predictive display integrated with the flight control system (FCS). Both approaches allow transforming the controlled element dynamics in the crossover frequency range, improving the tracking accuracy, and suppressing the effect of time delay in the vehicle response. The technique for optimization of the predictive law is applied to the landing task. The results of the mathematical modeling and experimental investigations carried out for this task are considered in the paper.

  16. Variation in dielectric properties due to pathological changes in human liver.

    PubMed

    Peyman, Azadeh; Kos, Bor; Djokić, Mihajlo; Trotovšek, Blaž; Limbaeck-Stokin, Clara; Serša, Gregor; Miklavčič, Damijan

    2015-12-01

    Dielectric properties of freshly excised human liver tissues (in vitro) with several pathological conditions including cancer were obtained in frequency range 100 MHz-5 GHz. Differences in dielectric behavior of normal and pathological tissues at microwave frequencies are discussed based on histological information for each tissue. Data presented are useful for many medical applications, in particular nanosecond pulsed electroporation techniques. Knowledge of dielectric properties is vital for mathematical calculations of local electric field distribution inside electroporated tissues and can be used to optimize the process of electroporation for treatment planning procedures. © 2015 Wiley Periodicals, Inc.

  17. Constrained Optimization Methods in Health Services Research-An Introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force.

    PubMed

    Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S

    2017-03-01

    Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

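    The "regular vs. severe patients" example in the report maps directly onto a small linear program: maximize total health benefit subject to clinician-time and budget constraints. The sketch below uses scipy.optimize.linprog with made-up coefficients; the benefit values, resource usages, and limits are assumptions, not the Task Force's numbers.

      # Toy constrained-optimization example in the spirit of the report:
      # choose how many regular (x1) and severe (x2) patients to treat to maximize
      # health benefit under time and budget limits.  All coefficients are assumed.
      from scipy.optimize import linprog

      benefit = [2.0, 5.0]          # benefit per regular / severe patient treated
      c = [-b for b in benefit]     # linprog minimizes, so negate the benefit

      A_ub = [[1.0, 3.0],           # clinician hours per patient   (limit: 120 hours)
              [200.0, 900.0]]       # cost per patient in dollars   (limit: $45,000)
      b_ub = [120.0, 45000.0]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
      x_regular, x_severe = res.x
      print("treat %.1f regular and %.1f severe patients, total benefit %.1f" %
            (x_regular, x_severe, -res.fun))
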
  18. Optimal design of the rotor geometry of line-start permanent magnet synchronous motor using the bat algorithm

    NASA Astrophysics Data System (ADS)

    Knypiński, Łukasz

    2017-12-01

    In this paper an algorithm for the optimization of the excitation system of line-start permanent magnet synchronous motors is presented. On the basis of this algorithm, software was developed in the Borland Delphi environment. The software consists of two independent modules: an optimization solver, and a module including the mathematical model of a synchronous motor with a self-start ability. The optimization module contains the bat algorithm procedure. The mathematical model of the motor has been developed in an Ansys Maxwell environment. In order to determine the functional parameters of the motor, additional scripts in the Visual Basic language were developed. Selected results of the optimization calculation are presented and compared with results for the particle swarm optimization algorithm.

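    For readers unfamiliar with the bat algorithm named above, its core is a frequency-tuned velocity/position update plus a local random walk gated by loudness and pulse rate. The sketch below shows the standard textbook update loop on a generic objective; it is not the Delphi solver described in the paper, and the test function and parameter values are assumptions.

      # Generic bat-algorithm update loop (minimization).  Standard textbook form;
      # the sphere objective and all parameter values are assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      f = lambda x: np.sum(x**2, axis=-1)               # stand-in objective (sphere function)

      n, dim = 20, 4
      x = rng.uniform(-5, 5, (n, dim))                  # bat positions
      v = np.zeros((n, dim))                            # bat velocities
      f_min, f_max, alpha = 0.0, 2.0, 0.9               # frequency range and loudness decay
      loudness, pulse_rate = np.ones(n), 0.5 * np.ones(n)
      best = x[np.argmin(f(x))].copy()

      for it in range(100):
          freq = f_min + (f_max - f_min) * rng.random(n)        # frequency tuning
          v += (x - best) * freq[:, None]
          candidate = x + v
          # local random walk around the best solution for some bats
          walk = rng.random(n) > pulse_rate
          candidate[walk] = best + 0.01 * loudness[walk][:, None] * rng.normal(size=(walk.sum(), dim))
          # accept improvements probabilistically (loudness gate) and update the best
          improve = (f(candidate) < f(x)) & (rng.random(n) < loudness)
          x[improve], loudness[improve] = candidate[improve], alpha * loudness[improve]
          best = x[np.argmin(f(x))].copy()

      print("best objective after 100 iterations:", f(best))
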
  19. Mathematical modeling of a Ti:sapphire solid-state laser

    NASA Technical Reports Server (NTRS)

    Swetits, John J.

    1987-01-01

    The project initiated a study of a mathematical model of a tunable Ti:sapphire solid-state laser. A general mathematical model was developed for the purpose of identifying design parameters which will optimize the system, and serve as a useful predictor of the system's behavior.

  20. Design optimization of steel frames using an enhanced firefly algorithm

    NASA Astrophysics Data System (ADS)

    Carbas, Serdar

    2016-12-01

    Mathematical modelling of real-world-sized steel frames under the Load and Resistance Factor Design-American Institute of Steel Construction (LRFD-AISC) steel design code provisions, where the steel profiles for the members are selected from a table of steel sections, turns out to be a discrete nonlinear programming problem. Finding the optimum design of such design optimization problems using classical optimization techniques is difficult. Metaheuristic algorithms provide an alternative way of solving such problems. The firefly algorithm (FFA) belongs to the swarm intelligence group of metaheuristics. The standard FFA has the drawback of being caught up in local optima in large-sized steel frame design problems. This study attempts to enhance the performance of the FFA by suggesting two new expressions for the attractiveness and randomness parameters of the algorithm. Two real-world-sized design examples are designed by the enhanced FFA and its performance is compared with standard FFA as well as with particle swarm and cuckoo search algorithms.

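    The attractiveness and randomness parameters that this study modifies appear in the standard firefly move, where each firefly moves toward any brighter one with an attractiveness that decays exponentially with distance. A generic minimal version is sketched below; the enhanced expressions proposed in the paper are not reproduced, and the objective and parameter values are assumptions.

      # Standard firefly-algorithm move (minimization); the paper's enhanced
      # attractiveness/randomness expressions are NOT reproduced here.
      import numpy as np

      rng = np.random.default_rng(2)
      f = lambda x: np.sum((x - 1.0)**2)               # stand-in objective (assumed)

      n, dim = 15, 3
      beta0, gamma, alpha = 1.0, 1.0, 0.2              # base attractiveness, absorption, randomness
      x = rng.uniform(-5, 5, (n, dim))

      for it in range(200):
          fitness = np.array([f(xi) for xi in x])
          for i in range(n):
              for j in range(n):
                  if fitness[j] < fitness[i]:          # move i toward brighter (better) j
                      r2 = np.sum((x[i] - x[j])**2)
                      beta = beta0 * np.exp(-gamma * r2)
                      x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                      fitness[i] = f(x[i])
          alpha *= 0.98                                # gradually reduce the random step

      print("best objective:", fitness.min())
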
  1. Experimental Investigation and Optimization of TIG Welding Parameters on Aluminum 6061 Alloy Using Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, Rishi; Mevada, N. Ramesh; Rathore, Santosh; Agarwal, Nitin; Rajput, Vinod; Sinh Barad, AjayPal

    2017-08-01

    To improve the welding quality of aluminum (Al) plate, a TIG welding system has been prepared in which the welding current, shielding gas flow rate and current polarity can be controlled during the welding process. In the present work, an attempt has been made to study the effect of welding current, current polarity, and shielding gas flow rate on the tensile strength of the weld joint. Based on the number of parameters and their levels, the Response Surface Methodology technique has been selected as the Design of Experiment. For understanding the influence of input parameters on the ultimate tensile strength of the weldment, an ANOVA analysis has been carried out. The TIG welding process is also described and optimized using a new nature-inspired metaheuristic, the firefly algorithm, developed by Xin-She Yang at Cambridge University in 2007. A general formulation of the firefly algorithm is presented together with an analytical mathematical model to optimize the TIG welding process through a single equivalent objective function.

  2. Performance improvement of an active vibration absorber subsystem for an aircraft model using a bees algorithm based on multi-objective intelligent optimization

    NASA Astrophysics Data System (ADS)

    Zarchi, Milad; Attaran, Behrooz

    2017-11-01

    This study develops a mathematical model to investigate the behaviour of adaptable shock absorber dynamics for the six-degree-of-freedom aircraft model in the taxiing phase. The purpose of this research is to design a proportional-integral-derivative technique for control of an active vibration absorber system using a hydraulic nonlinear actuator based on the bees algorithm. This optimization algorithm is inspired by the natural intelligent foraging behaviour of honey bees. The neighbourhood search strategy is used to find better solutions around the previous one. The parameters of the controller are adjusted by minimizing the aircraft's acceleration and impact force as the multi-objective function. The major advantages of this algorithm over other optimization algorithms are its simplicity, flexibility and robustness. The results of the numerical simulation indicate that the active suspension increases the comfort of the ride for passengers and the fatigue life of the structure. This is achieved by decreasing the impact force, displacement and acceleration significantly.

  3. Optimizing Cellular Networks Enabled with Renewal Energy via Strategic Learning.

    PubMed

    Sohn, Insoo; Liu, Huaping; Ansari, Nirwan

    2015-01-01

    An important issue in the cellular industry is the rising energy cost and carbon footprint due to the rapid expansion of the cellular infrastructure. Greening cellular networks has thus attracted attention. Among the promising green cellular network techniques, the renewable energy-powered cellular network has drawn increasing attention as a critical element towards reducing carbon emissions due to massive energy consumption in the base stations deployed in cellular networks. Game theory is a branch of mathematics that is used to evaluate and optimize systems with multiple players with conflicting objectives and has been successfully used to solve various problems in cellular networks. In this paper, we model the green energy utilization and power consumption optimization problem of a green cellular network as a pilot power selection strategic game and propose a novel distributed algorithm based on a strategic learning method. The simulation results indicate that the proposed algorithm achieves correlated equilibrium of the pilot power selection game, resulting in optimum green energy utilization and power consumption reduction.

  4. A New Conflict Resolution Method for Multiple Mobile Robots in Cluttered Environments With Motion-Liveness.

    PubMed

    Shahriari, Mohammadali; Biglarbegian, Mohammad

    2018-01-01

    This paper presents a new conflict resolution methodology for multiple mobile robots while ensuring their motion-liveness, especially for cluttered and dynamic environments. Our method constructs a mathematical formulation in a form of an optimization problem by minimizing the overall travel times of the robots subject to resolving all the conflicts in their motion. This optimization problem can be easily solved through coordinating only the robots' speeds. To overcome the computational cost in executing the algorithm for very cluttered environments, we develop an innovative method through clustering the environment into independent subproblems that can be solved using parallel programming techniques. We demonstrate the scalability of our approach through performing extensive simulations. Simulation results showed that our proposed method is capable of resolving the conflicts of 100 robots in less than 1.23 s in a cluttered environment that has 4357 intersections in the paths of the robots. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. We finally compared our approach with other existing methods in the literature both quantitatively and qualitatively. This comparison shows while our approach is mathematically sound, it is more computationally efficient, scalable for very large number of robots, and guarantees the live and smooth motion of robots.

  5. Quantifying uncertainty in partially specified biological models: how can optimal control theory help us?

    PubMed

    Adamson, M W; Morozov, A Y; Kuzenkov, O A

    2016-09-01

    Mathematical models in biology are highly simplified representations of a complex underlying reality and there is always a high degree of uncertainty with regards to model function specification. This uncertainty becomes critical for models in which the use of different functions fitting the same dataset can yield substantially different predictions-a property known as structural sensitivity. Thus, even if the model is purely deterministic, then the uncertainty in the model functions carries through into uncertainty in model predictions, and new frameworks are required to tackle this fundamental problem. Here, we consider a framework that uses partially specified models in which some functions are not represented by a specific form. The main idea is to project infinite dimensional function space into a low-dimensional space taking into account biological constraints. The key question of how to carry out this projection has so far remained a serious mathematical challenge and hindered the use of partially specified models. Here, we propose and demonstrate a potentially powerful technique to perform such a projection by using optimal control theory to construct functions with the specified global properties. This approach opens up the prospect of a flexible and easy to use method to fulfil uncertainty analysis of biological models.

  6. Inferring neural activity from BOLD signals through nonlinear optimization.

    PubMed

    Vakorin, Vasily A; Krakovska, Olga O; Borowsky, Ron; Sarty, Gordon E

    2007-11-01

    The blood oxygen level-dependent (BOLD) fMRI signal does not measure neuronal activity directly. This fact is a key concern for interpreting functional imaging data based on BOLD. Mathematical models describing the path from neural activity to the BOLD response allow us to numerically solve the inverse problem of estimating the timing and amplitude of the neuronal activity underlying the BOLD signal. In fact, these models can be viewed as an advanced substitute for the impulse response function. In this work, the issue of estimating the dynamics of neuronal activity from the observed BOLD signal is considered within the framework of optimization problems. The model is based on the extended "balloon" model and describes the conversion of neuronal signals into the BOLD response through the transitional dynamics of the blood flow-inducing signal, cerebral blood flow, cerebral blood volume and deoxyhemoglobin concentration. Global optimization techniques are applied to find a control input (the neuronal activity and/or the biophysical parameters in the model) that causes the system to follow an admissible solution to minimize discrepancy between model and experimental data. As an alternative to a local linearization (LL) filtering scheme, the optimization method escapes the linearization of the transition system and provides a possibility to search for the global optimum, avoiding spurious local minima. We have found that the dynamics of the neural signals and the physiological variables as well as the biophysical parameters can be robustly reconstructed from the BOLD responses. Furthermore, it is shown that spiking off/on dynamics of the neural activity is the natural mathematical solution of the model. Incorporating, in addition, the expansion of the neural input by smooth basis functions, representing a low-pass filtering, allows us to model local field potential (LFP) solutions instead of spiking solutions.

  7. Evaluation of orbits with incomplete knowledge of the mathematical expectancy and the matrix of covariation of errors

    NASA Technical Reports Server (NTRS)

    Bakhshiyan, B. T.; Nazirov, R. R.; Elyasberg, P. E.

    1980-01-01

    The problem of selecting the optimal algorithm of filtration and the optimal composition of the measurements is examined assuming that the precise values of the mathematical expectancy and the matrix of covariation of errors are unknown. It is demonstrated that the optimal algorithm of filtration may be utilized for making some parameters more precise (for example, the parameters of the gravitational fields) after preliminary determination of the elements of the orbit by a simpler method of processing (for example, the method of least squares).

  8. Simultaneous beam sampling and aperture shape optimization for SPORT.

    PubMed

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques together, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case. It significantly improved the target conformality and at the same time critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem max doses, and the right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with the conventional IMRT plans.

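    Of the three techniques combined in this work, the pattern-search step is the simplest to illustrate: evaluate the objective at coordinate offsets around the current parameters, move if any offset improves, otherwise shrink the step. The sketch below is a generic compass search on a stand-in objective, not the treatment-planning code; the objective and step sizes are assumptions.

      # Generic compass (pattern) search: explores directions a gradient-based step
      # cannot reach.  The quadratic objective and step sizes are assumed for illustration.
      import numpy as np

      def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
          x, n = np.asarray(x0, dtype=float), len(x0)
          fx = f(x)
          for _ in range(max_iter):
              improved = False
              for d in np.vstack([np.eye(n), -np.eye(n)]):   # +/- each coordinate direction
                  trial = x + step * d
                  if f(trial) < fx:
                      x, fx, improved = trial, f(trial), True
                      break
              if not improved:
                  step *= 0.5                                 # no direction helped: refine the mesh
                  if step < tol:
                      break
          return x, fx

      f = lambda x: (x[0] - 2.0)**2 + 10.0 * (x[1] + 1.0)**2  # stand-in "plan quality" objective
      x_opt, f_opt = compass_search(f, x0=[0.0, 0.0])
      print("optimum near:", x_opt, "value:", f_opt)
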
  9. Simultaneous beam sampling and aperture shape optimization for SPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques together, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case. It significantly improved the target conformality and at the same time critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem max doses, and the right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. Conclusions: The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with the conventional IMRT plans.

  10. Computerized proof techniques for undergraduates

    NASA Astrophysics Data System (ADS)

    Smith, Christopher J.; Tefera, Akalu; Zeleke, Aklilu

    2012-12-01

    The use of computer algebra systems such as Maple and Mathematica is becoming increasingly important and widespread in mathematics learning, teaching and research. In this article, we present computerized proof techniques of Gosper, Wilf-Zeilberger and Zeilberger that can be used for enhancing the teaching and learning of topics in discrete mathematics. We demonstrate by examples how one can use these computerized proof techniques to raise students' interests in the discovery and proof of mathematical identities and enhance their problem-solving skills.

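    The Gosper step of these computerized proof techniques is available in computer algebra systems; the minimal sketch below assumes SymPy's gosper_sum routine, which returns a closed form when the summand is hypergeometric (the identity and the numeric check are chosen here for illustration).

      # Minimal Gosper-style summation sketch (assumes sympy.concrete.gosper.gosper_sum).
      import sympy as sp
      from sympy.concrete.gosper import gosper_sum

      k, n = sp.symbols('k n', integer=True, nonnegative=True)

      # Closed form of sum_{k=0}^{n} k * 2**k found by Gosper's algorithm.
      closed = gosper_sum(k * 2**k, (k, 0, n))
      print(sp.simplify(closed))          # expected to match (n - 1)*2**(n + 1) + 2

      # Quick numeric check for a few n, in the classroom spirit described above.
      for nn in range(1, 6):
          assert sum(kk * 2**kk for kk in range(nn + 1)) == closed.subs(n, nn)
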
  11. Topology optimization of natural convection: Flow in a differentially heated cavity

    NASA Astrophysics Data System (ADS)

    Saglietti, Clio; Schlatter, Philipp; Berggren, Martin; Henningson, Dan

    2017-11-01

    The goal of the present work is to develop methods for optimization of the design of natural convection cooled heat sinks, using resolved simulation of both fluid flow and heat transfer. We rely on mathematical programming techniques combined with direct numerical simulations in order to iteratively update the topology of a solid structure towards optimality, i.e. until the design yielding the best performance is found, while satisfying a specific set of constraints. The investigated test case is a two-dimensional differentially heated cavity, in which the two vertical walls are held at different temperatures. The buoyancy force induces a swirling convective flow around a solid structure, whose topology is optimized to maximize the heat flux through the cavity. We rely on the spectral-element code Nek5000 to compute a high-order accurate solution of the natural convection flow arising from the conjugate heat transfer in the cavity. The laminar, steady-state solution of the problem is evaluated with a time-marching scheme that has an increased convergence rate; the actual iterative optimization is obtained using a steepest-decent algorithm, and the gradients are conveniently computed using the continuous adjoint equations for convective heat transfer.

  12. Tailoring Modified Moore Method Techniques to Liberal Arts Mathematics Courses

    ERIC Educational Resources Information Center

    Hitchman, Theron J.; Shaw, Douglas

    2015-01-01

    Inquiry-based learning (IBL) techniques can be used in mathematics courses for non-majors, such as courses required for liberal arts majors to fulfill graduation requirements. Unique challenges are discussed, followed by adaptations of IBL techniques to overcome those challenges.

  13. The relation between learning mathematics and students' competencies in understanding texts

    NASA Astrophysics Data System (ADS)

    Hapipi, Azmi, Syahrul; Sripatmi, Amrullah

    2017-08-01

    This study was a descriptive study that aimed to gain an overview of the relation between learning mathematics and students' competencies in understanding texts. The research is classified as an ex post facto study, due in part to the fact that the variable studied had already occurred. The sample was taken using a stratified proportional sampling technique, selected because the population, in the context of learning mathematics, is diverse and also tiered. The results of this study indicate that there is a relationship between learning mathematics and students' competencies in understanding texts.

  14. Mathematical Modeling of Thermofrictional Milling Process Using ANSYS WB Software

    NASA Astrophysics Data System (ADS)

    Sherov, K. T.; Sikhimbayev, M. R.; Sherov, A. K.; Donenbayev, B. S.; Rakishev, A. K.; Mazdubai, A. B.; Musayev, M. M.; Abeuova, A. M.

    2017-06-01

    This article presents ANSYS WB-based mathematical modelling of the thermofrictional milling process, which allowed studying the dynamics of the thermal and physical processes occurring during the processing. The technique used also allows determination of the optimal cutting conditions of thermofrictional milling for processing various materials, in particular steels 40CN2MA, 30CGSA, 45 and 3sp. In our study, from among a number of existing models of cutting fracture, we chose the criterion first proposed by Prof. V. L. Kolmogorov. In order to increase the calculation performance, a mathematical model was proposed that used only two objects: a parallelepiped-shaped workpiece and a cutting insert in the form of a pentagonal prism. In addition, the work takes into account the friction coefficient between the cutting insert and the workpiece, taken equal to 0.4. To determine the temperature in the subcontact layer of the workpiece, we introduced the coordinates of nine characteristic points with the same interval in the local coordinate system. As a result, the temperature values were obtained for different materials at the studied points during the cutter speed change. The research results showed the possibility of controlling thermal processes during processing by choosing the optimum cutting modes.

  15. GlobiPack v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe

    2010-03-31

    GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms, serving as the line-search procedure with Newton and quasi-Newton optimization and nonlinear equation solver methods. These are standard published 1-D line search algorithms such as are described in the book by Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist. There is no specific connection or mention whatsoever of any specific application; you cannot find more general mathematical software.

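    The 1-D line-search globalization that GlobiPack packages can be illustrated with the standard backtracking (Armijo) rule from Nocedal and Wright: shrink the step until a sufficient-decrease condition holds. The sketch below is a generic Python rendition, not GlobiPack's C++ interface; the function names, constants, and test problem are assumptions.

      # Generic backtracking (Armijo) line search, the kind of 1-D globalization
      # GlobiPack provides for Newton/quasi-Newton steps.  Not GlobiPack's API.
      import numpy as np

      def backtracking_line_search(f, grad, x, p, alpha0=1.0, c1=1e-4, rho=0.5, max_iter=50):
          """Return a step length satisfying f(x + a*p) <= f(x) + c1*a*grad(x).p."""
          fx, slope = f(x), np.dot(grad(x), p)
          alpha = alpha0
          for _ in range(max_iter):
              if f(x + alpha * p) <= fx + c1 * alpha * slope:
                  return alpha
              alpha *= rho                     # sufficient decrease failed: shrink the step
          return alpha

      # Example: one damped steepest-descent step on the Rosenbrock function (assumed test problem).
      f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
      grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
      x = np.array([-1.2, 1.0])
      p = -grad(x)                             # descent direction for illustration
      a = backtracking_line_search(f, grad, x, p)
      print("accepted step length:", a, " new f:", f(x + a * p))
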
  16. Multi-Positioning Mathematics Class Size: Teachers' Views

    ERIC Educational Resources Information Center

    Handal, Boris; Watson, Kevin; Maher, Marguerite

    2015-01-01

    This paper explores mathematics teachers' perceptions about class size and the impact class size has on teaching and learning in secondary mathematics classrooms. It seeks to understand teachers' views about optimal class sizes and their thoughts about the education variables that influence these views. The paper draws on questionnaire responses…

  17. Issues in Teaching Mathematics

    ERIC Educational Resources Information Center

    Ediger, Marlow

    2013-01-01

    In this article, the author states that there are selected issues in mathematics instruction that educators should be well aware of when planning lessons and units of study. These issues provide a basis for thought and discussion when assisting pupils to attain more optimally. Purposeful studying of issues guides mathematics teachers in…

  18. Towards nanometric resolution in multilayer depth profiling: a comparative study of RBS, SIMS, XPS and GDOES.

    PubMed

    Escobar Galindo, Ramón; Gago, Raul; Duday, David; Palacio, Carlos

    2010-04-01

    An increasing amount of effort is currently being directed towards the development of new functionalized nanostructured materials (i.e., multilayers and nanocomposites). Using an appropriate combination of composition and microstructure, it is possible to optimize and tailor the final properties of the material to its final application. The analytical characterization of these new complex nanostructures requires high-resolution analytical techniques that are able to provide information about surface and depth composition at the nanometric level. In this work, we comparatively review the state of the art in four different depth-profiling characterization techniques: Rutherford backscattering spectroscopy (RBS), secondary ion mass spectrometry (SIMS), X-ray photoelectron spectroscopy (XPS) and glow discharge optical emission spectroscopy (GDOES). In addition, we predict future trends in these techniques regarding improvements in their depth resolutions. Subnanometric resolution can now be achieved in RBS using magnetic spectrometry systems. In SIMS, the use of rotating sample holders and oxygen flooding during analysis as well as the optimization of floating low-energy ion guns to lower the impact energy of the primary ions improves the depth resolution of the technique. Angle-resolved XPS provides a very powerful and nondestructive technique for obtaining depth profiling and chemical information within the range of a few monolayers. Finally, the application of mathematical tools (deconvolution algorithms and a depth-profiling model), pulsed sources and surface plasma cleaning procedures is expected to greatly improve GDOES depth resolution.

  19. Mathematical Intelligence and Mathematical Creativity: A Causal Relationship

    ERIC Educational Resources Information Center

    Tyagi, Tarun Kumar

    2017-01-01

    This study investigated the causal relationship between mathematical creativity and mathematical intelligence. Four hundred thirty-nine 8th-grade students, age ranged from 11 to 14 years, were included in the sample of this study by random cluster technique on which mathematical creativity and Hindi adaptation of mathematical intelligence test…

  20. Modelling the evolution of drug resistance in the presence of antiviral drugs

    PubMed Central

    Wu, Jianhong; Yan, Ping; Archibald, Chris

    2007-01-01

    Background The emergence of drug resistance in treated populations and the transmission of drug resistant strains to newly infected individuals are important public health concerns in the prevention and control of infectious diseases such as HIV and influenza. Mathematical modelling may help guide the design of treatment programs and also may help us better understand the potential benefits and limitations of prevention strategies. Methods To explore further the potential synergies between modelling of drug resistance in HIV and in pandemic influenza, the Public Health Agency of Canada and the Mathematics for Information Technology and Complex Systems brought together selected scientists and public health experts for a workshop in Ottawa in January 2007, to discuss the emergence and transmission of HIV antiviral drug resistance, to report on progress in the use of mathematical models to study the emergence and spread of drug resistant influenza viral strains, and to recommend future research priorities. Results General lectures and round-table discussions were organized around the issues on HIV drug resistance at the population level, HIV drug resistance in Western Canada, HIV drug resistance at the host level (with focus on optimal treatment strategies), and drug resistance for pandemic influenza planning. Conclusion Some of the issues related to drug resistance in HIV and pandemic influenza can possibly be addressed using existing mathematical models, with a special focus on linking the existing models to the data obtained through the Canadian HIV Strain and DR Surveillance Program. Preliminary statistical analysis of these data carried out at PHAC, together with the general model framework developed by Dr. Blower and her collaborators, should provide further insights into the mechanisms behind the observed trends and thus could help with the prediction and analysis of future trends in the aforementioned items. Remarkable similarity between dynamic, compartmental models for the evolution of wild and drug resistance strains of both HIV and pandemic influenza may provide sufficient common ground to create synergies between modellers working in these two areas. One of the key contributions of mathematical modeling to the control of infectious diseases is the quantification and design of optimal strategies; combining techniques of operations research with dynamic modeling would enhance the contribution of mathematical modeling to the prevention and control of infectious diseases. PMID:17953775

  1. Modelling the evolution of drug resistance in the presence of antiviral drugs.

    PubMed

    Wu, Jianhong; Yan, Ping; Archibald, Chris

    2007-10-23

    The emergence of drug resistance in treated populations and the transmission of drug-resistant strains to newly infected individuals are important public health concerns in the prevention and control of infectious diseases such as HIV and influenza. Mathematical modelling may help guide the design of treatment programs and also may help us better understand the potential benefits and limitations of prevention strategies. To explore further the potential synergies between modelling of drug resistance in HIV and in pandemic influenza, the Public Health Agency of Canada and the Mathematics for Information Technology and Complex Systems brought together selected scientists and public health experts for a workshop in Ottawa in January 2007, to discuss the emergence and transmission of HIV antiviral drug resistance, to report on progress in the use of mathematical models to study the emergence and spread of drug-resistant influenza viral strains, and to recommend future research priorities. General lectures and round-table discussions were organized around the issues of HIV drug resistance at the population level, HIV drug resistance in Western Canada, HIV drug resistance at the host level (with focus on optimal treatment strategies), and drug resistance for pandemic influenza planning. Some of the issues related to drug resistance in HIV and pandemic influenza can possibly be addressed using existing mathematical models, with a special focus on linking the existing models to the data obtained through the Canadian HIV Strain and DR Surveillance Program. Preliminary statistical analysis of these data carried out at PHAC, together with the general model framework developed by Dr. Blower and her collaborators, should provide further insights into the mechanisms behind the observed trends and thus could help with the prediction and analysis of future trends in the aforementioned items. Remarkable similarity between dynamic, compartmental models for the evolution of wild-type and drug-resistant strains of both HIV and pandemic influenza may provide sufficient common ground to create synergies between modellers working in these two areas. One of the key contributions of mathematical modelling to the control of infectious diseases is the quantification and design of optimal strategies; combining techniques of operations research with dynamic modelling would enhance the contribution of mathematical modelling to the prevention and control of infectious diseases.
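
    As an illustration of the dynamic, compartmental models discussed above, the sketch below gives a minimal two-strain transmission model in which treatment of the wild-type strain can select for a drug-resistant strain. It is a generic textbook-style construction with hypothetical parameters, not one of the workshop's models or the framework of Dr. Blower and collaborators.

```python
# Minimal two-strain transmission model (hypothetical parameters) illustrating how
# treatment of the wild-type strain can select for a drug-resistant strain.
import numpy as np
from scipy.integrate import solve_ivp

beta_w, beta_r = 0.35, 0.25   # transmission rates (resistant strain assumed less fit)
gamma = 0.10                  # natural recovery rate
tau = 0.20                    # treatment rate applied to wild-type infections
rho = 0.05                    # fraction of treated cases that acquire resistance

def rhs(t, y):
    s, i_w, i_r = y
    new_w = beta_w * s * i_w          # new wild-type infections
    new_r = beta_r * s * i_r          # new resistant infections
    d_s = -new_w - new_r
    d_iw = new_w - (gamma + tau) * i_w
    d_ir = new_r + rho * tau * i_w - gamma * i_r
    return [d_s, d_iw, d_ir]

sol = solve_ivp(rhs, (0.0, 300.0), [0.99, 0.01, 0.0], t_eval=np.linspace(0, 300, 7))
for t, i_w, i_r in zip(sol.t, sol.y[1], sol.y[2]):
    print(f"t={t:6.1f}  wild-type={i_w:.4f}  resistant={i_r:.4f}")
```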

  2. Optimal starting conditions for the rendezvous maneuver: Analytical and computational approach

    NASA Astrophysics Data System (ADS)

    Ciarcia, Marco

    The three-dimensional rendezvous between two spacecraft is considered: a target spacecraft on a circular orbit around the Earth and a chaser spacecraft initially on some elliptical orbit yet to be determined. The chaser spacecraft has variable mass, limited thrust, and its trajectory is governed by three controls, one determining the thrust magnitude and two determining the thrust direction. We seek the time history of the controls in such a way that the propellant mass required to execute the rendezvous maneuver is minimized. Two cases are considered: (i) time-to-rendezvous free and (ii) time-to-rendezvous given, respectively equivalent to (i) free angular travel and (ii) fixed angular travel for the target spacecraft. The above problem has been studied by several authors under the assumption that the initial separation coordinates and the initial separation velocities are given, hence known initial conditions for the chaser spacecraft. In this paper, it is assumed that both the initial separation coordinates and initial separation velocities are free except for the requirement that the initial chaser-to-target distance is given so as to prevent the occurrence of trivial solutions. Two approaches are employed: optimal control formulation (Part A) and mathematical programming formulation (Part B). In Part A, analyses are performed with the multiple-subarc sequential gradient-restoration algorithm for optimal control problems. They show that the fuel-optimal trajectory is zero-bang, namely it is characterized by two subarcs: a long coasting zero-thrust subarc followed by a short powered max-thrust braking subarc. While the thrust direction of the powered subarc is continuously variable for the optimal trajectory, its replacement with a constant (yet optimized) thrust direction produces a very efficient guidance trajectory. Indeed, for all values of the initial distance, the fuel required by the guidance trajectory is within less than one percent of the fuel required by the optimal trajectory. For the guidance trajectory, because of the replacement of the variable thrust direction of the powered subarc with a constant thrust direction, the optimal control problem degenerates into a mathematical programming problem with a relatively small number of degrees of freedom, more precisely: three for case (i) time-to-rendezvous free and two for case (ii) time-to-rendezvous given. In particular, we consider the rendezvous between the Space Shuttle (chaser) and the International Space Station (target). Once a given initial distance SS-to-ISS is preselected, the present work supplies not only the best initial conditions for the rendezvous trajectory, but simultaneously the corresponding final conditions for the ascent trajectory. In Part B, an analytical solution of the Clohessy-Wiltshire equations is presented (i) neglecting the change of the spacecraft mass due to the fuel consumption and (ii) assuming that the thrust is finite, that is, the trajectory includes powered subarcs flown with max thrust and coasting subarcs flown with zero thrust. Then, employing the analytical solution found, we study the rendezvous problem under the assumption that the initial separation coordinates and initial separation velocities are free except for the requirement that the initial chaser-to-target distance is given. The main contribution of Part B is the development of analytical solutions for the powered subarcs, an important extension of the analytical solutions already available for the coasting subarcs.
One consequence is that the entire optimal trajectory can be described analytically. Another consequence is that the optimal control problems degenerate into mathematical programming problems. A further consequence is that, vis-a-vis the optimal control formulation, the mathematical programming formulation reduces the CPU time by a factor of order 1000. Key words. Space trajectories, rendezvous, optimization, guidance, optimal control, calculus of variations, Mayer problems, Bolza problems, transformation techniques, multiple-subarc sequential gradient-restoration algorithm.
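
    The coasting subarcs referred to in Part B admit the well-known closed-form solution of the Clohessy-Wiltshire equations. The sketch below propagates a relative state through a zero-thrust subarc using that closed form; it does not reproduce the paper's powered-subarc solutions or its optimization, and the numerical values are illustrative only.

```python
# Closed-form propagation of the Clohessy-Wiltshire (Hill) equations for a coasting
# (zero-thrust) subarc; x is radial, y is along-track, z is cross-track, n is the
# target's orbital mean motion. Illustrative values only.
import math

def cw_propagate(x0, y0, z0, vx0, vy0, vz0, n, t):
    s, c = math.sin(n * t), math.cos(n * t)
    x  = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y  = 6 * (s - n * t) * x0 + y0 - (2 / n) * (1 - c) * vx0 + ((4 * s - 3 * n * t) / n) * vy0
    z  = c * z0 + (s / n) * vz0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = -6 * n * (1 - c) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    vz = -n * s * z0 + c * vz0
    return x, y, z, vx, vy, vz

n = 2 * math.pi / 5580.0          # mean motion for a ~93 min ISS-like orbit [rad/s]
state = cw_propagate(-1000.0, 5000.0, 200.0, 0.0, 1.0, 0.0, n, 600.0)
print("relative state after 600 s:", [round(v, 2) for v in state])
```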

  3. Application of Box-Behnken design to prepare gentamicin-loaded calcium carbonate nanoparticles.

    PubMed

    Maleki Dizaj, Solmaz; Lotfipour, Farzaneh; Barzegar-Jalali, Mohammad; Zarrintan, Mohammad-Hossein; Adibkia, Khosro

    2016-09-01

    The aim of this research was to prepare and optimize calcium carbonate (CaCO3) nanoparticles as carriers for gentamicin sulfate. A chemical precipitation method was used to prepare the gentamicin sulfate-loaded CaCO3 nanoparticles. A 3-factor, 3-level Box-Behnken design was used for the optimization procedure, with the molar ratio of CaCl2:Na2CO3 (X1), the concentration of drug (X2), and the speed of homogenization (X3) as the independent variables. The particle size and entrapment efficiency were considered as response variables. Mathematical equations and response surface plots were used, along with the contour plots, to relate the dependent and independent variables. The results indicated that the speed of homogenization was the main variable contributing to particle size and entrapment efficiency. The combined effect of all three independent variables was also evaluated. Using the response optimization design, the optimized X1-X3 levels were predicted. An optimized formulation was then prepared according to these levels, resulting in a particle size of 80.23 nm and an entrapment efficiency of 30.80%. It was concluded that the chemical precipitation technique, together with the Box-Behnken experimental design methodology, could be successfully used to optimize the formulation of drug-incorporated calcium carbonate nanoparticles.
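
    For readers unfamiliar with the experimental design used above, the following sketch generates a generic 3-factor, 3-level Box-Behnken design in coded units and fits a quadratic response-surface model by least squares. The response values are synthetic placeholders, not the study's particle-size or entrapment-efficiency data.

```python
# Generic 3-factor Box-Behnken design (coded units) and a least-squares quadratic
# response-surface fit; the response values below are placeholders, not study data.
import itertools
import numpy as np

def box_behnken_3(center_points=3):
    runs = []
    for i, j in itertools.combinations(range(3), 2):     # each pair of factors at +/-1
        for a, b in itertools.product((-1, 1), repeat=2):
            run = [0, 0, 0]
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([[0, 0, 0]] * center_points)              # center points
    return np.array(runs, dtype=float)

def quadratic_model_matrix(x):
    x1, x2, x3 = x.T
    return np.column_stack([np.ones(len(x)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

X = box_behnken_3()                        # 15 runs: 12 edge midpoints + 3 centers
rng = np.random.default_rng(0)
y = 100 - 20 * X[:, 2] + 5 * X[:, 0] ** 2 + rng.normal(0, 1, len(X))  # synthetic response
coef, *_ = np.linalg.lstsq(quadratic_model_matrix(X), y, rcond=None)
print("fitted quadratic coefficients:", np.round(coef, 2))
```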

  4. Nonlinear programming extensions to rational function approximations of unsteady aerodynamics

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1987-01-01

    This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.

  5. Combinatorial optimization in foundry practice

    NASA Astrophysics Data System (ADS)

    Antamoshkin, A. N.; Masich, I. S.

    2016-04-01

    A multicriteria mathematical model of foundry production capacity planning is suggested in the paper. The model is formulated in terms of pseudo-Boolean optimization theory. Different search optimization methods were used to solve the resulting problem.

  6. Optimal quality control of bakers' yeast fed-batch culture using population dynamics.

    PubMed

    Dairaku, K; Izumoto, E; Morikawa, H; Shioya, S; Takamatsu, T

    1982-12-01

    An optimal quality control policy for the overall specific growth rate of bakers' yeast, which maximizes fermentative activity in bread making, was obtained by direct search based on a previously proposed mathematical model. That model describes the age distribution of bakers' yeast, which has an essential relationship to fermentative ability in bread making. The model is a simple aging model with two periods: nonbudding and budding. Based on the result obtained by direct search, quality control of the bakers' yeast fed-batch culture was performed and confirmed experimentally to be valid.

  7. Chickpea seeds germination rational parameters optimization

    NASA Astrophysics Data System (ADS)

    Safonova, Yu A.; Ivliev, M. N.; Lemeshkin, A. V.

    2018-05-01

    The paper presents experimental results on the influence of chickpea seed bioactivation parameters on enzymatic activity. Optimal bioactivation process modes were obtained by regression-factor analysis: a process temperature of 13.6 °C and a process duration of 71.5 h. It was found that during germination the activities of the proteolytic, amylolytic and lipolytic enzymes increased, while urease activity decreased. The dependences of enzyme activity on chickpea seed germination conditions were obtained by mathematical processing of the experimental data. The calculated data are in good agreement with the experimental ones. This confirms the efficiency of optimization based on mathematical planning of experiments for determining, from enzymatic activity, the optimal germination parameters of bioactivated chickpea seeds.

  8. Assisting Pupils in Mathematics Achievement (The Common Core Standards)

    ERIC Educational Resources Information Center

    Ediger, Marlow

    2011-01-01

    Mathematics teachers must expect reasonably high standards of achievement from pupils. Too frequently, pupils achieve at a substandard level, and better achievement is necessary. Pupils should therefore have self-esteem needs met in the school and classroom setting, so that learners feel that mathematics is worthwhile and effort must be put forth to…

  9. A Snowflake Project: Calculating, Analyzing, and Optimizing with the Koch Snowflake.

    ERIC Educational Resources Information Center

    Bolte, Linda A.

    2002-01-01

    Presents a project that addresses several components of the Algebra and Communication Standards for Grades 9-12 presented in Principles and Standards for School Mathematics (NCTM, 2000). Describes doing mathematical modeling and using the language of mathematics to express a recursive relationship in the perimeter and area of the Koch snowflake.…
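
    The recursive perimeter and area relationships mentioned above can be computed directly; a short sketch follows, assuming a unit side length for the starting triangle.

```python
# Recursive perimeter and area of the Koch snowflake built on a unit equilateral
# triangle: each iteration multiplies the perimeter by 4/3 and adds 3*4**(n-1)
# new triangles of side (1/3)**n.
import math

def koch(n_iterations, side=1.0):
    perimeter = 3 * side
    area = math.sqrt(3) / 4 * side ** 2
    for n in range(1, n_iterations + 1):
        perimeter *= 4 / 3
        new_triangles = 3 * 4 ** (n - 1)
        area += new_triangles * math.sqrt(3) / 4 * (side / 3 ** n) ** 2
    return perimeter, area

for n in (0, 1, 2, 5, 20):
    p, a = koch(n)
    print(f"iteration {n:2d}: perimeter = {p:10.3f}, area = {a:.6f}")
# The perimeter grows without bound while the area converges to 8/5 of the
# starting triangle's area (about 0.6928 for a unit side).
```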

  10. Mathematical modeling for novel cancer drug discovery and development.

    PubMed

    Zhang, Ping; Brusic, Vladimir

    2014-10-01

    Mathematical modeling enables the in silico classification of cancers, the prediction of disease outcomes, the optimization of therapy, the identification of promising drug targets and the prediction of resistance to anticancer drugs. In silico pre-screened drug targets can be validated by a small number of carefully selected experiments. This review discusses the basics of mathematical modeling in cancer drug discovery and development. The topics include in silico discovery of novel molecular drug targets, optimization of immunotherapies, personalized medicine and guiding preclinical and clinical trials. Breast cancer has been used to demonstrate the applications of mathematical modeling in cancer diagnostics, the identification of high-risk populations, cancer screening strategies, prediction of tumor growth and guiding cancer treatment. Mathematical models are the key components of the toolkit used in the fight against cancer. The combinatorial complexity of new drug discovery is enormous, making systematic drug discovery by experimentation alone difficult, if not impossible. The biggest challenges include seamless integration of growing data, information and knowledge, and making them available for a multiplicity of analyses. Mathematical models are essential for bringing cancer drug discovery into the era of Omics, Big Data and personalized medicine.

  11. A mathematical model on the optimal timing of offspring desertion.

    PubMed

    Seno, Hiromi; Endo, Hiromi

    2007-06-07

    We consider offspring desertion as the optimal strategy for the deserting parent, analyzing a mathematical model for its expected reproductive success. It is shown that the optimality of offspring desertion depends significantly on the offspring's birth timing in the mating season and on the other ecological parameters characterizing the innate nature of the animals considered. In particular, desertion is less likely to occur for offspring born in the later period of the mating season. It is also implied that offspring desertion after partially biparental care would be observable only under a specific condition.

  12. Decision science and cervical cancer.

    PubMed

    Cantor, Scott B; Fahs, Marianne C; Mandelblatt, Jeanne S; Myers, Evan R; Sanders, Gillian D

    2003-11-01

    Mathematical modeling is an effective tool for guiding cervical cancer screening, diagnosis, and treatment decisions for patients and policymakers. This article describes the use of mathematical modeling as outlined in five presentations from the Decision Science and Cervical Cancer session of the Second International Conference on Cervical Cancer held at The University of Texas M. D. Anderson Cancer Center, April 11-14, 2002. The authors provide an overview of mathematical modeling, especially decision analysis and cost-effectiveness analysis, and examples of how it can be used for clinical decision making regarding the prevention, diagnosis, and treatment of cervical cancer. Included are applications as well as theory regarding decision science and cervical cancer. Mathematical modeling can answer such questions as the optimal frequency for screening, the optimal age to stop screening, and the optimal way to diagnose cervical cancer. Results from one mathematical model demonstrated that a vaccine against high-risk strains of human papillomavirus was a cost-effective use of resources, and discussion of another model demonstrated the importance of collecting direct non-health care costs and time costs for cost-effectiveness analysis. Research presented indicated that care must be taken when applying the results of population-wide, cost-effectiveness analyses to reduce health disparities. Mathematical modeling can encompass a variety of theoretical and applied issues regarding decision science and cervical cancer. The ultimate objective of using decision-analytic and cost-effectiveness models is to identify ways to improve women's health at an economically reasonable cost. Copyright 2003 American Cancer Society.

  13. Mathematics Competency for Beginning Chemistry Students Through Dimensional Analysis.

    PubMed

    Pursell, David P; Forlemu, Neville Y; Anagho, Leonard E

    2017-01-01

    Mathematics competency in nursing education and practice may be addressed by an instructional variation of the traditional dimensional analysis technique typically presented in beginning chemistry courses. The authors studied 73 beginning chemistry students using the typical dimensional analysis technique and the variation technique. Student quantitative problem-solving performance was evaluated. Students using the variation technique scored significantly better (18.3 of 20 points, p < .0001) on the final examination quantitative titration problem than those who used the typical technique (10.9 of 20 points). American Chemical Society examination scores and in-house assessment indicate that better performing beginning chemistry students were more likely to use the variation technique rather than the typical technique. The variation technique may be useful as an alternative instructional approach to enhance beginning chemistry students' mathematics competency and problem-solving ability in both education and practice. [J Nurs Educ. 2017;56(1):22-26.]. Copyright 2017, SLACK Incorporated.

  14. Perfection Of Methods Of Mathematical Analysis For Increasing The Completeness Of Subsoil Development

    NASA Astrophysics Data System (ADS)

    Fokina, Mariya

    2017-11-01

    The economy of Russia is based to a great degree on the mineral and raw materials complex. The mining industry is a prioritized and important area. Given the high competitiveness of businesses in this sector, increasing the efficiency of completed work and manufactured products will become a central issue. Improvement of planning and management in this sector should be based on multivariant study and the optimization of planning decisions, and on the appraisal of their immediate and long-term results, taking the dynamics of economic development into account. All of this requires the use of economic-mathematical models and methods. Applying an economic-mathematical model to determine the optimal ore mine production capacity, we obtain a figure of 4,712,000 tons. The production capacity of the Uchalinsky ore mine is 1,560,000 tons, and that of the Uzelginsky ore mine is 3,650,000 tons. Conducting a corresponding analysis of the production of OAO "Uchalinsky GOK", an optimal production plan was obtained: optimal production of copper, 77,961.4 rubles; optimal production of zinc, 17,975.66 rubles. The residual production volume of the two main ore mines of OAO "UGOK" is 160 million tons of ore.
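
    The abstract does not state the underlying optimization model; as an illustration of the kind of economic-mathematical production-planning model involved, the sketch below solves a small two-product linear program with SciPy. All coefficients are hypothetical and are not the OAO "UGOK" data.

```python
# A minimal two-product production-planning linear program of the kind referred to
# in the abstract; all coefficients here are hypothetical, not the OAO "UGOK" data.
from scipy.optimize import linprog

profit = [-1.2, -0.8]        # negated profit per ton of copper and zinc concentrate
A_ub = [[2.0, 1.0],          # ore consumed per ton of each concentrate
        [1.0, 3.0]]          # mill hours per ton of each concentrate
b_ub = [4_712_000,           # available ore, tons
        6_000_000]           # available mill hours
res = linprog(profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal plan (tons):", res.x, " objective:", -res.fun)
```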

  15. Concerning the electrosynthesis of hydrogen peroxide and peroxodisulfates. Section 2: Optimization of electrolysis cells using an electrolyzer for peroxodisulfuric acid as an example

    NASA Technical Reports Server (NTRS)

    Schleiff, M.; Thiele, W.; Matschiner, H.

    1986-01-01

    A model of an electrolyzer for peroxodisulfuric acid is presented and analyzed mathematically. Its application to engineering and economic optimization is investigated in detail. The mathematical analysis leads to conclusions concerning how the position of the optimum with respect to the various target functions shifts due to changes in the individual design-related and economic parameters.

  16. Tailoring High Order Time Discretizations for Use with Spatial Discretizations of Hyperbolic PDEs

    DTIC Science & Technology

    2015-05-19

    Duration of grant personnel: Sigal Gottlieb, Professor of Mathematics, UMass Dartmouth; Daniel Higgs, Graduate Student, UMass Dartmouth; Zachary Grant, Undergraduate, UMass Dartmouth. Cited publications include: S. Gottlieb, Z. Grant, and D. Higgs, "Optimal Explicit Strong Stability Preserving Runge–Kutta Methods with High Linear Order and Optimal Nonlinear Order," accepted for publication in Mathematics of Computation, available on arXiv at http://arxiv.org/abs/1403.6519; and C. Bresten, S. Gottlieb, Z. Grant, D. Higgs, …
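
    As background to the strong stability preserving (SSP) Runge-Kutta methods named in the cited publication, the sketch below implements the classical three-stage, third-order SSP method in Shu-Osher form and applies it to a scalar test equation; it is not one of the optimal methods derived in that work.

```python
# Classical three-stage, third-order strong stability preserving Runge-Kutta method
# (Shu-Osher form), demonstrated on the scalar ODE u' = -u with known solution e^{-t}.
import math

def ssprk33_step(f, u, t, dt):
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(t + 0.5 * dt, u2))

f = lambda t, u: -u
u, t, dt = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    u = ssprk33_step(f, u, t, dt)
    t += dt
print(f"numerical u(1) = {u:.8f}, exact = {math.exp(-1.0):.8f}")
```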

  17. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable by making each component of the detachable phase mask move asymmetrically. An improved Fisher information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  18. Birationality and Landau-Ginzburg Models

    NASA Astrophysics Data System (ADS)

    Clarke, Patrick

    2017-08-01

    We introduce a new technique for approaching birationality questions that arise in the mirror symmetry of complete intersections in toric varieties. As an application we answer affirmatively and conclusively the question of Batyrev-Nill (Integer points in polyhedra—geometry, number theory, representation theory, algebra, optimization, statistics, volume 452 of Contemporary mathematics. American Mathematical Society, Providence, pp 35-66, 2008) about the birationality of Calabi-Yau families associated to multiple mirror nef-partitions. This completes the progress in this direction made by Li's breakthrough (Li in Adv Math 299:71-107, 2016). In the process, we obtain results in the theory of Borisov's nef-partitions (Borisov in Towards the mirror symmetry for Calabi-Yau complete intersections in Gorenstein toric Fano varieties, 1993. arXiv:alg-geom/9310001 ) and provide new insight into the geometric content of the multiple mirror phenomenon.

  19. Determining anisotropic conductivity using diffusion tensor imaging data in magneto-acoustic tomography with magnetic induction

    NASA Astrophysics Data System (ADS)

    Ammari, Habib; Qiu, Lingyun; Santosa, Fadil; Zhang, Wenlong

    2017-12-01

    In this paper we present a mathematical and numerical framework for a procedure of imaging the anisotropic electrical conductivity tensor by integrating magneto-acoustic tomography with data acquired from diffusion tensor imaging. Magneto-acoustic tomography with magnetic induction (MAT-MI) is a hybrid, non-invasive medical imaging technique to produce conductivity images with improved spatial resolution and accuracy. Diffusion tensor imaging (DTI) is also a non-invasive technique for characterizing the diffusion properties of water molecules in tissues. We propose a model for anisotropic conductivity in which the conductivity is proportional to the diffusion tensor. Under this assumption, we propose an optimal control approach for reconstructing the anisotropic electrical conductivity tensor. We prove convergence and Lipschitz type stability of the algorithm and present numerical examples to illustrate its accuracy and feasibility.

  20. Statistical Mechanics of Coherent Ising Machine — The Case of Ferromagnetic and Finite-Loading Hopfield Models —

    NASA Astrophysics Data System (ADS)

    Aonishi, Toru; Mimura, Kazushi; Utsunomiya, Shoko; Okada, Masato; Yamamoto, Yoshihisa

    2017-10-01

    The coherent Ising machine (CIM) has attracted attention as one of the most effective Ising computing architectures for solving large scale optimization problems because of its scalability and high-speed computational ability. However, it is difficult to implement the Ising computation in the CIM because the theories and techniques of classical thermodynamic equilibrium Ising spin systems cannot be directly applied to the CIM. This means we have to adapt these theories and techniques to the CIM. Here we focus on a ferromagnetic model and a finite loading Hopfield model, which are canonical models sharing a common mathematical structure with almost all other Ising models. We derive macroscopic equations to capture nonequilibrium phase transitions in these models. The statistical mechanical methods developed here constitute a basis for constructing evaluation methods for other Ising computation models.

  1. Mathematical Geology.

    ERIC Educational Resources Information Center

    Jones, Thomas A.

    1983-01-01

    Mathematical techniques used to solve geological problems are briefly discussed (including comments on use of geostatistics). Highlights of conferences/meetings and conference papers in mathematical geology are also provided. (JN)

  2. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    NASA Astrophysics Data System (ADS)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, could help engineers in designing a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation and the simulated data is utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, and thus there is a need for a surrogate simulator that can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM is done by a comparative analysis with Artificial Neural Network (ANN) and Support Vector Machine (SVM) as they were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
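
    A minimal sketch of the Extreme Learning Machine idea used here as a proxy simulator: hidden-layer weights are drawn at random and fixed, and only the output weights are obtained by least squares. The training data below are synthetic stand-ins for simulator input/output pairs, not BIOPLUME III results.

```python
# A minimal Extreme Learning Machine regressor: hidden-layer weights are random and
# fixed, and only the output weights are solved by least squares. Training data here
# are synthetic, standing in for simulator input/output pairs.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                  # random biases
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)             # output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))                     # e.g. design variables at 3 wells
y = np.sin(X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]         # synthetic "simulator" output
model = ELMRegressor(n_hidden=80).fit(X, y)
print("max abs training error:", np.abs(model.predict(X) - y).max())
```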

  3. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
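
    JuPOETs itself is a Julia package; as an illustration (in Python rather than Julia) of the Pareto-dominance criterion it builds on, the sketch below filters candidate parameter sets down to the non-dominated (Pareto) front for two conflicting training objectives. It is not the JuPOETs API.

```python
# Generic Pareto-dominance filter (minimization): keep the parameter sets whose
# objective vectors are not dominated by any other candidate. This illustrates the
# ranking idea behind Pareto-optimal ensemble methods; it is not the JuPOETs API.
def dominates(a, b):
    """True if objective vector a dominates b (<= in all objectives, < in at least one)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(objectives):
    front = []
    for i, cand in enumerate(objectives):
        if not any(dominates(other, cand) for j, other in enumerate(objectives) if j != i):
            front.append(i)
    return front

# Each row: (error on data set 1, error on data set 2) for one candidate parameter set.
errors = [(0.9, 0.1), (0.2, 0.8), (0.5, 0.5), (0.6, 0.6), (0.1, 0.95)]
print("indices on the Pareto front:", pareto_front(errors))
```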

  4. Engaging with the Art & Science of Statistics

    ERIC Educational Resources Information Center

    Peters, Susan A.

    2010-01-01

    How can statistics clearly be mathematical and yet distinct from mathematics? The answer lies in the reality that statistics is both an art and a science, and both aspects are important for teaching and learning statistics. Statistics is a mathematical science in that it applies mathematical theories and techniques. Mathematics provides the…

  5. Mathematical Creativity and Mathematical Aptitude: A Cross-Lagged Panel Analysis

    ERIC Educational Resources Information Center

    Tyagi, Tarun Kumar

    2016-01-01

    Cross-lagged panel correlation (CLPC) analysis has been used to identify causal relationships between mathematical creativity and mathematical aptitude. For this study, 480 8th standard students were selected through a random cluster technique from 9 intermediate and high schools of Varanasi, India. Mathematical creativity and mathematical…

  6. Examining Mechanical Strength Characteristics of Selective Inhibition Sintered HDPE Specimens Using RSM and Desirability Approach

    NASA Astrophysics Data System (ADS)

    Rajamani, D.; Esakki, Balasubramanian

    2017-09-01

    Selective inhibition sintering (SIS) is a powder-based additive manufacturing (AM) technique for producing functional parts with an inexpensive system compared with other AM processes. The mechanical properties of SIS-fabricated parts depend strongly on various process parameters, notably layer thickness, heat energy, heater feedrate, and printer feedrate. In this paper, the influence of these process parameters on mechanical properties such as tensile and flexural strength is examined using response surface methodology (RSM). The test specimens are fabricated using high-density polyethylene (HDPE), and mathematical models are developed to correlate the control factors to the respective experimental design responses. Further, optimal SIS process parameters are determined using the desirability approach to enhance the mechanical properties of the HDPE specimens. Optimization studies reveal that a combination of high heat energy, low layer thickness, and medium heater and printer feedrates yielded superior mechanical strength characteristics.
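
    The desirability approach mentioned above is commonly implemented with Derringer-Suich-style desirability functions combined through a geometric mean. A minimal sketch follows; the response values and acceptable ranges are placeholders, not the measured SIS/HDPE data.

```python
# Generic "larger-is-better" desirability functions combined into an overall
# desirability (geometric mean), as used in multi-response optimization; the response
# values and limits below are placeholders, not the SIS/HDPE measurements.
def desirability_larger_is_better(y, low, high, weight=1.0):
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def overall_desirability(d_values):
    prod = 1.0
    for d in d_values:
        prod *= d
    return prod ** (1.0 / len(d_values))

tensile, flexural = 21.5, 33.0          # predicted responses for one parameter setting
d1 = desirability_larger_is_better(tensile, low=15.0, high=25.0)
d2 = desirability_larger_is_better(flexural, low=25.0, high=40.0)
print(f"d_tensile={d1:.3f}, d_flexural={d2:.3f}, overall D={overall_desirability([d1, d2]):.3f}")
```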

  7. Three Program Architecture for Design Optimization

    NASA Technical Reports Server (NTRS)

    Miura, Hirokazu; Olson, Lawrence E. (Technical Monitor)

    1998-01-01

    In this presentation, I would like to review the historical perspective on the program architectures used to build design optimization capabilities based on mathematical programming and other numerical search techniques. It is rather straightforward to classify the program architectures into three categories as shown above. However, the relative importance of each of the three approaches has not been static; it changes dynamically as the capabilities of available computational resources increase. For example, we once considered that the direct coupling architecture would never be used for practical problems, but the availability of such computer systems as multi-processors has changed that view. I would like to review the roles of the three architectures from historical as well as current and future perspectives. There may also be some possibility for the emergence of hybrid architectures. I hope to provide some seeds for active discussion about where we are heading in the very dynamic environment of high-speed computing and communication.

  8. Rival approaches to mathematical modelling in immunology

    NASA Astrophysics Data System (ADS)

    Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.

    2007-08-01

    In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.

  9. An optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system.

    PubMed

    Shen, L; Levine, S H; Catchen, G L

    1987-07-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm⁻² or at any depth of interest. The doses at 7 mg·cm⁻² in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, the advantage of the optimization method is that its performance is not sensitive to the specific method of calibration.

  10. A novel medical information management and decision model for uncertain demand optimization.

    PubMed

    Bi, Ya

    2015-01-01

    Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost. Due to uncertain demand, it is difficult to make accurate decisions on procurement volume. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics is clearly better than using general random distribution functions. The objective is to establish a novel medical information management and decision model for uncertain demand optimization. A novel optimal management and decision model under uncertain demand is presented, based on fuzzy mathematics and a new comprehensive improved particle swarm algorithm. The optimal management and decision model can effectively reduce medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence effectively reduces the computational complexity of the optimal management and decision model. Therefore, the new model can be used for accurate decisions on procurement volume under uncertain demand.

  11. Assessing Mathematics Self-Efficacy: How Many Categories Do We Really Need?

    ERIC Educational Resources Information Center

    Toland, Michael D.; Usher, Ellen L.

    2016-01-01

    The present study tested whether a reduced number of categories is optimal for assessing mathematics self-efficacy among middle school students using a 6-point Likert-type format or a 0- to 100-point format. Two independent samples of middle school adolescents (N = 1,913) were administered a 24-item Middle School Mathematics Self-Efficacy Scale…

  12. How Revisions to Mathematical Visuals Affect Cognition: Evidence from Eye Tracking

    ERIC Educational Resources Information Center

    Clinton, Virginia; Cooper, Jennifer L.; Michaelis, Joseph; Alibali, Martha W.; Nathan, Mitchell J.

    2017-01-01

    Mathematics curricula are frequently rich with visuals, but these visuals are often not designed for optimal use of students' limited cognitive resources. The authors of this study revised the visuals in a mathematics lesson based on instructional design principles. The purpose of this study is to examine the effects of these revised visuals on…

  13. An Eddy Current Testing Platform System for Pipe Defect Inspection Based on an Optimized Eddy Current Technique Probe Design.

    PubMed

    Rifai, Damhuji; Abdalla, Ahmed N; Razali, Ramdan; Ali, Kharudin; Faraj, Moneer A

    2017-03-13

    The use of the eddy current technique (ECT) for the non-destructive testing of conducting materials has become increasingly important in the past few years. Non-destructive ECT plays a key role in ensuring the safety and integrity of large industrial structures such as oil and gas pipelines. This paper introduces a novel ECT probe design integrated with a distributed ECT inspection system (DSECT) used for crack inspection of inner ferromagnetic pipes. The system consists of an array of giant magneto-resistive (GMR) sensors, a pneumatic system, a rotating magnetic field excitation source and a host PC acting as the data analysis center. The probe design parameters, namely the probe diameter, the excitation coil and the number of GMR sensors in the array, are optimized using numerical optimization based on the desirability approach. The main benefits of DSECT can be seen in its modularity and flexibility regarding the use of different types of magnetic transducers/sensors and signals of a different nature with either digital or analog outputs, making it suited for ECT probe designs using an array of GMR magnetic sensors. A real-time application of the DSECT system is demonstrated for the inspection of a 70 mm carbon steel pipe. In order to predict axial and circumferential defect detection, a mathematical model is developed based on the technique known as response surface methodology (RSM). The inspection results for a carbon steel pipe sample with artificial defects indicate that the system design is highly efficient.

  14. Nonnegative constraint quadratic program technique to enhance the resolution of γ spectra

    NASA Astrophysics Data System (ADS)

    Li, Jinglun; Xiao, Wuyun; Ai, Xianyun; Chen, Ye

    2018-04-01

    The concepts of the nonnegative least squares (NNLS) problem and the linear complementarity problem (LCP) are introduced for resolution enhancement of γ spectra. The respective algorithms, namely the active set method and the primal-dual interior point method, are applied to solve these two problems. Mathematically, the nonnegativity constraint results in sparsity of the optimal deconvolution solution, and it is this sparsity that enhances the resolution. Finally, a comparison of peak position accuracy and computation time is made between these two methods and the boosted L-R and Gold methods.
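
    A minimal illustration of the NNLS idea on a synthetic spectrum is sketched below using SciPy's nnls solver; the Gaussian response matrix and peak positions are assumptions for demonstration, not the paper's detector model or the active-set/interior-point implementations it compares.

```python
# Nonnegative least squares deconvolution of a synthetic spectrum: the response
# matrix A holds a Gaussian detector response per channel, and SciPy's NNLS solver
# enforces the nonnegativity that sharpens overlapping peaks. Synthetic data only.
import numpy as np
from scipy.optimize import nnls

channels = np.arange(128)
sigma = 3.0
A = np.exp(-0.5 * ((channels[:, None] - channels[None, :]) / sigma) ** 2)  # response matrix
A /= A.sum(axis=0)

truth = np.zeros(128)
truth[50], truth[58] = 100.0, 60.0          # two closely spaced peaks
rng = np.random.default_rng(0)
measured = A @ truth + rng.normal(0, 0.05, 128)

estimate, residual_norm = nnls(A, measured)
print("largest recovered channels:", np.argsort(estimate)[-2:][::-1],
      " residual:", round(residual_norm, 3))
```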

  15. Travel Demand Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Southworth, Frank; Garrow, Dr. Laurie

    This chapter describes the principal types of both passenger and freight demand models in use today, providing a brief history of model development supported by references to a number of popular texts on the subject, and directing the reader to papers covering some of the more recent technical developments in the area. Over the past half century a variety of methods have been used to estimate and forecast travel demands, drawing concepts from economic/utility maximization theory, transportation system optimization and spatial interaction theory, using and often combining solution techniques as varied as Box-Jenkins methods, non-linear multivariate regression, non-linear mathematical programming, and agent-based microsimulation.

  16. A design procedure for the handling qualities optimization of the X-29A aircraft

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Cox, Timothy H.

    1989-01-01

    The techniques used to improve the pitch-axis handling qualities of the X-29A wing-canard-planform fighter aircraft are reviewed. The aircraft and its FCS are briefly described, and the design method, which works within the existing FCS architecture, is characterized in detail. Consideration is given to the selection of design goals and design variables, the definition and calculation of the cost function, the validation of the mathematical model on the basis of flight-test data, and the validation of the improved design by means of nonlinear simulations. Flight tests of the improved design are shown to verify the simulation results.

  17. Recourse-based facility-location problems in hybrid uncertain environment.

    PubMed

    Wang, Shuming; Watada, Junzo; Pedrycz, Witold

    2010-08-01

    The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and costs are assumed to be fuzzy-random variables. The bounds of the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving an infinite number of second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the use of techniques of classical mathematical programming. In order to solve location problems of this nature, we first develop a technique of fuzzy-random simulation to compute the recourse function. The convergence of such simulation scenarios is discussed. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than approaches using other discrete metaheuristic algorithms, such as binary particle-swarm optimization, the genetic algorithm, and tabu search.

  18. Planning and executing motions for multibody systems in free-fall. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cameron, Jonathan M.

    1991-01-01

    The purpose of this research is to develop an end-to-end system that can be applied to a multibody system in free-fall to analyze its possible motions, save those motions in a database, and design a controller that can execute those motions. A goal is for the process to be highly automated and involve little human intervention. Ideally, the output of the system would be data and algorithms that could be put in ROM to control the multibody system in free-fall. The research applies to more than just robots in space. It applies to any multibody system in free-fall. Mathematical techniques from nonlinear control theory were used to study the nature of the system dynamics and its possible motions. Optimization techniques were applied to plan motions. Image compression techniques were proposed to compress the precomputed motion data for storage. A linearized controller was derived to control the system while it executes preplanned trajectories.

  19. Computerized optimization of radioimmunoassays for hCG and estradiol: an experimental evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanagishita, M.; Rodbard, D.

    1978-07-15

    The mathematical and statistical theory of radioimmunoassays (RIAs) has been used to develop a series of computer programs to optimize sensitivity or precision at any desired dose level for either equilibrium or nonequilibrium assays. These computer programs provide for the calculation of the equilibrium constants of association and binding capacities for antisera (parameters of Scatchard plots), the association and dissociation rate constants, and prediction of optimum concentration of labeled ligand and antibody and optimum incubation times for the assay. This paper presents an experimental evaluation of the use of these computer programs applied to RIAs for human chorionic gonadotropin (hCG) and estradiol. The experimental results are in reasonable semiquantitative agreement with the predictions of the computer simulations (usually within a factor of two) and thus partially validate the use of computer techniques to optimize RIAs that are reasonably well behaved, as in the case of the hCG and estradiol RIAs. Further, these programs can provide insights into the nature of the RIA system, e.g., the general nature of the sensitivity and precision surfaces. This facilitates empirical optimization of conditions.

  20. A Multi-Objective Optimization Technique to Model the Pareto Front of Organic Dielectric Polymers

    NASA Astrophysics Data System (ADS)

    Gubernatis, J. E.; Mannodi-Kanakkithodi, A.; Ramprasad, R.; Pilania, G.; Lookman, T.

    Multi-objective optimization is an area of decision making that is concerned with mathematical optimization problems involving more than one objective simultaneously. Here we describe two new Monte Carlo methods for this type of optimization in the context of their application to the problem of designing polymers with more desirable dielectric and optical properties. We present results of applying these Monte Carlo methods to a two-objective problem (maximizing the total static band dielectric constant and energy gap) and a three objective problem (maximizing the ionic and electronic contributions to the static band dielectric constant and energy gap) of a 6-block organic polymer. Our objective functions were constructed from high throughput DFT calculations of 4-block polymers, following the method of Sharma et al., Nature Communications 5, 4845 (2014) and Mannodi-Kanakkithodi et al., Scientific Reports, submitted. Our high throughput and Monte Carlo methods of analysis extend to general N-block organic polymers. This work was supported in part by the LDRD DR program of the Los Alamos National Laboratory and in part by a Multidisciplinary University Research Initiative (MURI) Grant from the Office of Naval Research.

  1. Modelling and optimization of semi-solid processing of 7075 Al alloy

    NASA Astrophysics Data System (ADS)

    Binesh, B.; Aghaie-Khafri, M.

    2017-09-01

    The new modified strain-induced melt activation (SIMA) process presented by Binesh and Aghaie-Khafri was optimized using a response surface methodology to improve the thixotropic characteristics of semi-solid 7075 alloy. The responses, namely the average grain size and the shape factor, were considered as functions of three independent input variables: effective strain, isothermal holding temperature and time. Mathematical models for the responses were developed using the regression analysis technique, and the adequacy of the models was validated by the analysis of variance method. The calculated results correlated fairly well with the experiments. It was found that all the first- and second-order terms of the independent parameters and the interactive terms of the effective strain and holding time were statistically significant for the responses. In order to simultaneously optimize the responses, the desirable values for the effective strain, holding temperature and time were predicted to be 5.1, 609 °C and 14 min, respectively, when employing the desirability function approach. Based on the optimization results, a significant improvement in the average grain size and shape factor of the semi-solid slurry prepared by the new modified SIMA process was observed.

  2. [Applications of mathematical statistics methods on compatibility researches of traditional Chinese medicines formulae].

    PubMed

    Mai, Lan-Yin; Li, Yi-Xuan; Chen, Yong; Xie, Zhen; Li, Jie; Zhong, Ming-Yu

    2014-05-01

    The compatibility of traditional Chinese medicine (TCM) formulae, which contain an enormous amount of information, is a complex component system. Applying mathematical statistics methods to research on the compatibility of TCM formulae has great significance for promoting the modernization of traditional Chinese medicines and for improving clinical efficacy and the optimization of formulae. As a tool for quantitative analysis, data inference and exploring the inherent rules of substances, mathematical statistics methods can be used to reveal the working mechanisms of the compatibility of TCM formulae both qualitatively and quantitatively. This paper reviews studies based on the application of mathematical statistics methods from the perspectives of dosage optimization, efficacy, changes of chemical components, and the rules of incompatibility and contraindication of formulae, and provides references for further studying and revealing the working mechanisms and connotations of traditional Chinese medicines.

  3. A PC program to optimize system configuration for desired reliability at minimum cost

    NASA Technical Reports Server (NTRS)

    Hills, Steven W.; Siahpush, Ali S.

    1994-01-01

    High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible but anything short of a super computer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that will determine the optimum configuration of a system of multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
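
    The paper's pair-wise comparative progression algorithm is not reproduced here; as a simple point of comparison, the sketch below uses a greedy marginal-gain heuristic to allocate redundant units in a series system under a cost budget, with hypothetical component data.

```python
# A simple greedy heuristic for redundancy allocation (not the paper's pair-wise
# comparative algorithm): at each step add the redundant unit giving the largest
# gain in series-system reliability per unit cost, within a cost budget.
def system_reliability(rel, counts):
    prob = 1.0
    for r, k in zip(rel, counts):
        prob *= 1.0 - (1.0 - r) ** k      # k parallel units of one component type
    return prob

def greedy_allocate(rel, cost, budget):
    counts = [1] * len(rel)               # start with one unit of each component
    spent = sum(cost)
    while True:
        best_gain, best_i = 0.0, None
        base = system_reliability(rel, counts)
        for i, c in enumerate(cost):
            if spent + c > budget:
                continue
            counts[i] += 1
            gain = (system_reliability(rel, counts) - base) / c
            counts[i] -= 1
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is None:
            return counts, system_reliability(rel, counts)
        counts[best_i] += 1
        spent += cost[best_i]

reliability = [0.90, 0.95, 0.85]      # component reliabilities (hypothetical)
unit_cost = [4.0, 6.0, 3.0]           # cost per unit (hypothetical)
counts, R = greedy_allocate(reliability, unit_cost, budget=30.0)
print("units per component:", counts, " system reliability:", round(R, 4))
```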

  4. Landcover Based Optimal Deconvolution of PALS L-band Microwave Brightness Temperature

    NASA Technical Reports Server (NTRS)

    Limaye, Ashutosh S.; Crosson, William L.; Laymon, Charles A.; Njoku, Eni G.

    2004-01-01

    An optimal de-convolution (ODC) technique has been developed to estimate microwave brightness temperatures of agricultural fields using microwave radiometer observations. The technique is applied to airborne measurements taken by the Passive and Active L and S band (PALS) sensor in Iowa during Soil Moisture Experiments in 2002 (SMEX02). Agricultural fields in the study area were predominantly soybeans and corn. The brightness temperatures of corn and soybeans were observed to be significantly different because of large differences in vegetation biomass. PALS observations have significant over-sampling; observations were made about 100 m apart and the sensor footprint extends to about 400 m. Conventionally, observations of this type are averaged to produce smooth spatial data fields of brightness temperatures. However, the conventional approach is in contrast to reality in which the brightness temperatures are in fact strongly dependent on landcover, which is characterized by sharp boundaries. In this study, we mathematically de-convolve the observations into brightness temperature at the field scale (500-800m) using the sensor antenna response function. The result is more accurate spatial representation of field-scale brightness temperatures, which may in turn lead to more accurate soil moisture retrieval.

  5. Genetic algorithms - What fitness scaling is optimal?

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac

    1993-01-01

    The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions which are optimal under different criteria is presented; it includes both the functions that have empirically proved best and new functions that may be worth trying.
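
    One widely used candidate among such scaling functions is classic linear fitness scaling, which preserves the population's average fitness while capping the best individual at a chosen multiple of the average. A minimal sketch with illustrative values follows; this is a common textbook scaling, not necessarily the optimum identified in the paper.

```python
# Classic linear fitness scaling f' = a*f + b, chosen so the scaled average equals
# the raw average and the scaled maximum equals c_mult times the average; one common
# candidate among the scaling functions the abstract compares. Illustrative values only.
def linear_scale(fitness, c_mult=2.0):
    avg = sum(fitness) / len(fitness)
    f_max = max(fitness)
    if f_max > avg:
        a = (c_mult - 1.0) * avg / (f_max - avg)
        b = avg * (f_max - c_mult * avg) / (f_max - avg)
    else:
        a, b = 1.0, 0.0                                   # flat population: leave unchanged
    return [max(a * f + b, 0.0) for f in fitness]         # clip any negative scaled values

raw = [1.0, 2.0, 3.0, 10.0]
print("scaled fitness:", [round(v, 2) for v in linear_scale(raw)])
```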

  6. A hybrid modeling system designed to support decision making in the optimization of extrusion of inhomogeneous materials

    NASA Astrophysics Data System (ADS)

    Kryuchkov, D. I.; Zalazinsky, A. G.

    2017-12-01

    Mathematical models and a hybrid modeling system are developed for the implementation of the experimental-calculation method for the engineering analysis and optimization of the plastic deformation of inhomogeneous materials with the purpose of improving metal-forming processes and machines. The created software solution integrates Abaqus/CAE, a subroutine for mathematical data processing, with the use of Python libraries and the knowledge base. Practical application of the software solution is exemplified by modeling the process of extrusion of a bimetallic billet. The results of the engineering analysis and optimization of the extrusion process are shown, the material damage being monitored.

  7. A generic framework to simulate realistic lung, liver and renal pathologies in CT imaging

    NASA Astrophysics Data System (ADS)

    Solomon, Justin; Samei, Ehsan

    2014-11-01

    Realistic three-dimensional (3D) mathematical models of subtle lesions are essential for many computed tomography (CT) studies focused on performance evaluation and optimization. In this paper, we develop a generic mathematical framework that describes the 3D size, shape, contrast, and contrast-profile characteristics of a lesion, as well as a method to create lesion models based on CT data of real lesions. Further, we implemented a technique to insert the lesion models into CT images in order to create hybrid CT datasets. This framework was used to create a library of realistic lesion models and corresponding hybrid CT images. The goodness of fit of the models was assessed using the coefficient of determination (R2) and the visual appearance of the hybrid images was assessed with an observer study using images of both real and simulated lesions and receiver operator characteristic (ROC) analysis. The average R2 of the lesion models was 0.80, implying that the models provide a good fit to real lesion data. The area under the ROC curve was 0.55, implying that the observers could not readily distinguish between real and simulated lesions. Therefore, we conclude that the lesion-modeling framework presented in this paper can be used to create realistic lesion models and hybrid CT images. These models could be instrumental in performance evaluation and optimization of novel CT systems.
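    The two summary statistics quoted above can be reproduced with a few lines of code. The sketch below computes a coefficient of determination for a fitted lesion model and a rank-based (Mann-Whitney) area under the ROC curve for a real-versus-simulated rating study; the data are synthetic placeholders, not the study's measurements.

      # Sketch of the two figures of merit quoted in the abstract: the
      # coefficient of determination R^2 for a fitted lesion model, and the
      # area under the ROC curve for a real-vs-simulated rating experiment.
      import numpy as np

      def r_squared(y_observed, y_model):
          y_observed, y_model = np.asarray(y_observed), np.asarray(y_model)
          ss_res = np.sum((y_observed - y_model) ** 2)
          ss_tot = np.sum((y_observed - y_observed.mean()) ** 2)
          return 1.0 - ss_res / ss_tot

      def auc(scores_real, scores_simulated):
          # Probability that a randomly chosen "real" case is rated higher
          # than a randomly chosen "simulated" case (ties count one half).
          wins = 0.0
          for r in scores_real:
              for s in scores_simulated:
                  wins += 1.0 if r > s else 0.5 if r == s else 0.0
          return wins / (len(scores_real) * len(scores_simulated))

      print(r_squared([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
      print(auc([3, 4, 4, 5], [3, 4, 2, 5]))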

  8. Optimal placement of tuning masses for vibration reduction in helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1988-01-01

    Described are methods for reducing vibration in helicopter rotor blades by determining optimum sizes and locations of tuning masses through formal mathematical optimization techniques. An optimization procedure is developed which employs the tuning masses and corresponding locations as design variables which are systematically changed to achieve low values of shear without a large mass penalty. The finite-element structural analysis of the blade and the optimization formulation require development of discretized expressions for two performance parameters: modal shaping parameter and modal shear amplitude. Matrix expressions for both quantities and their sensitivity derivatives are developed. Three optimization strategies are developed and tested. The first is based on minimizing the modal shaping parameter which indirectly reduces the modal shear amplitudes corresponding to each harmonic of airload. The second strategy reduces these amplitudes directly, and the third strategy reduces the shear as a function of time during a revolution of the blade. The first strategy works well for reducing the shear for one mode responding to a single harmonic of the airload, but has been found in some cases to be ineffective for more than one mode. The second and third strategies give similar results and show excellent reduction of the shear with a low mass penalty.
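    A heavily simplified sketch of the optimization set-up is given below: tuning-mass sizes are the design variables, a convex surrogate stands in for the modal shear, and the total added mass is constrained. It illustrates only the "design variables plus mass-penalty constraint" formulation, not the paper's finite-element model; the sensitivity values, budget, and surrogate objective are hypothetical.

      # Generic sketch of the optimization set-up (not the paper's
      # finite-element formulation): design variables are tuning-mass sizes,
      # the objective is a surrogate "modal shear" that each mass reduces, and
      # the added mass is limited by an inequality constraint.
      import numpy as np
      from scipy.optimize import minimize

      sensitivity = np.array([0.8, 0.5, 0.3])   # hypothetical shear reduction per unit mass
      baseline_shear = 10.0
      mass_budget = 6.0

      def modal_shear(m):
          # simple convex surrogate: shear drops with added mass, with
          # diminishing returns modelled by a quadratic recovery term
          return baseline_shear - sensitivity @ m + 0.05 * m @ m

      res = minimize(
          modal_shear,
          x0=np.ones(3),
          bounds=[(0.0, None)] * 3,
          constraints=[{"type": "ineq", "fun": lambda m: mass_budget - m.sum()}],
      )
      print(res.x, modal_shear(res.x))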

  9. Central composite rotatable design for investigation of microwave-assisted extraction of okra pod hydrocolloid.

    PubMed

    Samavati, Vahid

    2013-10-01

    Microwave-assisted extraction (MAE) technique was employed to extract the hydrocolloid from okra pods (OPH). The optimal conditions for microwave-assisted extraction of OPH were determined by response surface methodology. A central composite rotatable design (CCRD) was applied to evaluate the effects of three independent variables (microwave power (X1: 100-500 W), extraction time (X2: 30-90 min), and extraction temperature (X3: 40-90 °C)) on the extraction yield of OPH. The correlation analysis of the mathematical-regression model indicated that quadratic polynomial model could be employed to optimize the microwave extraction of OPH. The optimal conditions to obtain the highest recovery of OPH (14.911±0.27%) were as follows: microwave power, 395.56 W; extraction time, 67.11 min and extraction temperature, 73.33 °C. Under these optimal conditions, the experimental values agreed with the predicted ones by analysis of variance. It indicated high fitness of the model used and the success of response surface methodology for optimizing OPH extraction. After method development, the DPPH radical scavenging activity of the OPH was evaluated. MAE showed obvious advantages in terms of high extraction efficiency and radical scavenging activity of extract within the shorter extraction time. Copyright © 2013 Elsevier B.V. All rights reserved.
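    The response-surface step in such studies amounts to fitting a second-order polynomial in coded factors and locating its stationary point. The sketch below does exactly that on synthetic data; the design points and the fitted coefficients are not the okra-pod results.

      # Sketch of the response-surface step: fit a full quadratic model
      # y = b0 + sum(bi xi) + sum(bii xi^2) + sum(bij xi xj) to design-point
      # data in coded variables, then locate the stationary point.
      import numpy as np
      from itertools import combinations

      def quadratic_design_matrix(X):
          n, k = X.shape
          cols = [np.ones(n)] + [X[:, i] for i in range(k)] \
               + [X[:, i] ** 2 for i in range(k)] \
               + [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
          return np.column_stack(cols)

      rng = np.random.default_rng(1)
      X = rng.uniform(-1, 1, (20, 3))                        # coded factors x1..x3
      y = 14 - 2*(X[:, 0]-0.3)**2 - (X[:, 1]+0.2)**2 - 1.5*(X[:, 2]-0.5)**2 \
          + rng.normal(0, 0.05, 20)                          # synthetic yield

      beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)

      # Stationary point of the fitted surface: solve  (2*B) x = -b
      k = 3
      b = beta[1:1+k]
      B = np.diag(beta[1+k:1+2*k]).astype(float)
      for idx, (i, j) in enumerate(combinations(range(k), 2)):
          B[i, j] = B[j, i] = beta[1+2*k+idx] / 2.0
      x_opt = np.linalg.solve(-2*B, b)
      print(x_opt)      # should sit near (0.3, -0.2, 0.5) for this synthetic surface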

  10. Group investigation with scientific approach in mathematics learning

    NASA Astrophysics Data System (ADS)

    Indarti, D.; Mardiyana; Pramudya, I.

    2018-03-01

    The aim of this research is to find out the effect of the learning model on mathematics achievement. This research is quasi-experimental research. The population of the research is all VII grade students of Karanganyar regency in the academic year of 2016/2017. The sample of this research was taken using the stratified cluster random sampling technique. Data collection was done based on a mathematics achievement test. The data analysis technique used one-way ANOVA, following a normality test with the Lilliefors method and a homogeneity test with the Bartlett method. The result of this research is that mathematics learning using the Group Investigation learning model with a scientific approach produces better mathematics achievement than learning with the conventional model on the material of quadrilaterals. The Group Investigation learning model with a scientific approach can be used by teachers in mathematics learning, especially for the material of quadrilaterals, where it can improve mathematics achievement.

  11. Mathematics learning on geometry for children with autism

    NASA Astrophysics Data System (ADS)

    Widayati, F. E.; Usodo, B.; Pamudya, I.

    2017-12-01

    The purpose of this research is to describe: (1) the mathematics learning process in an inclusion class and (2) the obstacles during the process of mathematics learning in the inclusion class. This research is descriptive qualitative research. The subjects were a mathematics teacher, children with autism, and a teacher assistant. The methods of collecting data were observation and interview. The data validation technique was triangulation. The results of this research are: (1) There is a modification of the lesson plan for children with autism, covering the indicators of success, material, time, and assessment. The lesson plan for children with autism is arranged by the mathematics teacher and the teacher assistant. No special media for children with autism are used by the mathematics teacher. (2) The obstacle for children with autism is that they find it difficult to understand mathematical concepts. In addition, children with autism easily lose their focus.

  12. Application of mathematical models to metronomic chemotherapy: What can be inferred from minimal parameterized models?

    PubMed

    Ledzewicz, Urszula; Schättler, Heinz

    2017-08-10

    Metronomic chemotherapy refers to the frequent administration of chemotherapy at relatively low, minimally toxic doses without prolonged treatment interruptions. Different from conventional or maximum-tolerated-dose chemotherapy, which aims at the eradication of all malignant cells, in metronomic dosing the goal often lies in the long-term management of the disease when eradication proves elusive. Mathematical modeling and subsequent analysis (theoretical as well as numerical) have become an increasingly valuable tool (in silico) both for determining conditions under which specific treatment strategies should be preferred and for numerically optimizing treatment regimens. While elaborate, computationally driven, patient-specific schemes that would optimize the timing and drug dose levels are still a part of the future, such procedures may become instrumental in making chemotherapy effective in situations where it currently fails. Ideally, mathematical modeling and analysis will develop into an additional decision-making tool in the complicated process that is the determination of efficient chemotherapy regimens. In this article, we review some of the results that have been obtained about metronomic chemotherapy from mathematical models and what they infer about the structure of optimal treatment regimens. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mardechay

    1992-01-01

    The purpose of the research project was to continue the development of new methods for efficient aeroservoelastic analysis and optimization. The main targets were as follows: to complete the development of analytical tools for the investigation of flutter with large stiffness changes; to continue the work on efficient continuous gust response and sensitivity derivatives; and to advance the techniques of calculating dynamic loads with control and unsteady aerodynamic effects. An efficient and highly accurate mathematical model for time-domain analysis of flutter during which large structural changes occur was developed in cooperation with Carol D. Wieseman of NASA LaRC. The model was based on the second-year work 'Modal Coordinates for Aeroelastic Analysis with Large Local Structural Variations'. The work on continuous gust response was completed. An abstract of the paper 'Continuous Gust Response and Sensitivity Derivatives Using State-Space Models' was submitted for presentation at the 33rd Israel Annual Conference on Aviation and Astronautics, Feb. 1993. The abstract is given in Appendix A. The work extends the optimization model to deal with continuous gust objectives in a way that facilitates their inclusion in the efficient multi-disciplinary optimization scheme. Currently under development is a work designed to extend the analysis and optimization capabilities to loads and stress considerations. The work is on aircraft dynamic loads in response to impulsive and non-impulsive excitation. The work extends the formulations of the mode-displacement and summation-of-forces methods to include modes with significant local distortions, and load modes. An abstract of the paper 'Structural Dynamic Loads in Response to Impulsive Excitation' is given in Appendix B. Another work performed this year under the Grant was 'Size-Reduction Techniques for the Determination of Efficient Aeroservoelastic Models', given in Appendix C.

  14. A multi-objective programming model for assessment the GHG emissions in MSW management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavrotas, George, E-mail: mavrotas@chemeng.ntua.gr; Skoulaxinou, Sotiria; Gakis, Nikos

    2013-09-15

    Highlights: • The multi-objective multi-period optimization model. • The solution approach for the generation of the Pareto front with mathematical programming. • The very detailed description of the model (decision variables, parameters, equations). • The use of IPCC 2006 guidelines for landfill emissions (first order decay model) in the mathematical programming formulation. - Abstract: In this study a multi-objective mathematical programming model is developed for taking into account GHG emissions for Municipal Solid Waste (MSW) management. Mathematical programming models are often used for structure, design and operational optimization of various systems (energy, supply chain, processes, etc.). In the last twenty years they have been used all the more often in Municipal Solid Waste (MSW) management in order to provide optimal solutions, with the cost objective being the usual driver of the optimization. In our work we consider the GHG emissions as an additional criterion, aiming at a multi-objective approach. The Pareto front (cost vs. GHG emissions) of the system is generated using an appropriate multi-objective method. This information is essential to the decision maker because he can explore the trade-offs in the Pareto curve and select his most preferred among the Pareto optimal solutions. In the present work a detailed multi-objective, multi-period mathematical programming model is developed in order to describe the waste management problem. Apart from the bi-objective approach, the major innovations of the model are (1) the detailed modeling considering 34 materials and 42 technologies, (2) the detailed calculation of the energy content of the various streams based on the detailed material balances, and (3) the incorporation of the IPCC guidelines for the CH4 generated in the landfills (first order decay model). The equations of the model are described in full detail. Finally, the whole approach is illustrated with a case study referring to the application of the model in a Greek region.
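    To make the Pareto-front idea concrete, the sketch below traces a cost-versus-GHG trade-off curve for a toy waste-allocation LP using a weighted-sum scalarization. It only illustrates generating Pareto points with mathematical programming; the paper's 34-material, 42-technology model, its multi-period structure, and its specific multi-objective method are not reproduced, and all numbers are hypothetical.

      # Minimal sketch of generating a cost-vs-GHG Pareto front with the
      # weighted-sum approach on a toy waste-allocation LP (two treatment
      # technologies, one waste stream).
      import numpy as np
      from scipy.optimize import linprog

      waste = 100.0                       # tonnes to treat
      cost = np.array([30.0, 80.0])       # EUR/tonne: landfill, waste-to-energy
      ghg = np.array([1.2, 0.4])          # tCO2e/tonne
      capacity = np.array([70.0, 60.0])   # tonnes per technology

      pareto = []
      for w in np.linspace(0.0, 1.0, 11):
          # scalarized objective: w * cost + (1 - w) * ghg
          c = w * cost + (1.0 - w) * ghg
          res = linprog(c,
                        A_eq=[[1.0, 1.0]], b_eq=[waste],     # treat all waste
                        bounds=list(zip([0, 0], capacity)),
                        method="highs")
          x = res.x
          pareto.append((cost @ x, ghg @ x))

      for total_cost, total_ghg in sorted(set(pareto)):
          print(f"cost = {total_cost:7.1f} EUR, GHG = {total_ghg:6.1f} tCO2e")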

  15. Throughput Optimization of Continuous Biopharmaceutical Manufacturing Facilities.

    PubMed

    Garcia, Fernando A; Vandiver, Michael W

    2017-01-01

    In order to operate profitably under different product demand scenarios, biopharmaceutical companies must design their facilities with mass output flexibility in mind. Traditional biologics manufacturing technologies pose operational challenges in this regard due to their high costs and slow equipment turnaround times, restricting the types of products and mass quantities that can be processed. Modern plant design, however, has facilitated the development of lean and efficient bioprocessing facilities through footprint reduction and adoption of disposable and continuous manufacturing technologies. These development efforts have proven to be crucial in seeking to drastically reduce the high costs typically associated with the manufacturing of recombinant proteins. In this work, mathematical modeling is used to optimize annual production schedules for a single-product commercial facility operating with a continuous upstream and discrete batch downstream platform. Utilizing cell culture duration and volumetric productivity as process variables in the model, and annual plant throughput as the optimization objective, 3-D surface plots are created to understand the effect of process and facility design on expected mass output. The model shows that once a plant has been fully debottlenecked it is capable of processing well over a metric ton of product per year. Moreover, the analysis helped to uncover a major limiting constraint on plant performance, the stability of the neutralized viral inactivated pool, which may indicate that this should be a focus of attention during future process development efforts. LAY ABSTRACT: Biopharmaceutical process modeling can be used to design and optimize manufacturing facilities and help companies achieve a predetermined set of goals. One way to perform optimization is by making the most efficient use of process equipment in order to minimize the expenditure of capital, labor and plant resources. To that end, this paper introduces a novel mathematical algorithm used to determine the optimal equipment scheduling configuration that maximizes the mass output for a facility producing a single product. The paper also illustrates how different scheduling arrangements can have a profound impact on the availability of plant resources, and identifies limiting constraints on the plant design. In addition, simulation data is presented using visualization techniques that aid in the interpretation of the scientific concepts discussed. © PDA, Inc. 2017.

  16. Power systems locational marginal pricing in deregulated markets

    NASA Astrophysics Data System (ADS)

    Wang, Hui-Fung Francis

    Since the beginning of the 1990s, the electricity business has been transforming from a vertically integrated business to competitive market operations. The generation, transmission, and distribution subsystems of an electric utility are operated independently as Genco (generation subsystem), Transco (transmission subsystem), and Distco (distribution subsystem). This trend promotes more economical inter- and intra-regional transactions by the participating companies and the users of electricity, in pursuit of the intended objectives of deregulation. Various types of electricity markets have been implemented in North America in the past few years. However, transmission congestion management has become a key issue in electricity market design as more bilateral transactions are traded across long distances, competing for scarce transmission resources. It directly alters the traditional concept of energy pricing and impacts the bottom line, revenue and cost of electricity, of both suppliers and buyers. In this research, the transmission congestion problem in a deregulated market environment is elucidated by implementing the Locational Marginal Pricing (LMP) method. Building on a comprehensive understanding of the LMP method, new mathematical tools that will aid electric utilities in exploring new business opportunities are developed and presented in this dissertation. The dissertation focuses on the development of the concept of LMP forecasting and its implications for market participants in a deregulated market. Specifically, we explore methods of developing fast LMP calculation techniques that differ from existing LMPs. We also explore and document the usefulness of the proposed LMP in determining electricity pricing of a large-scale power system. The developed mathematical tools use well-known optimization techniques such as linear programming and are supported by several flow charts. Fast and practical security-constrained unit commitment methods are integral parts of the LMP algorithms. Different components of optimization techniques, unit commitment, power flow analysis, and matrix manipulations for large-scale power systems are integrated and represented by several new flow charts. The LMP concept and processes, mathematical models, and their corresponding algorithms have been implemented to study a small six-bus test power system/market and also the real-size New York power system/market, where transmission congestion is high and the electricity market is deregulated. The simulated results documented in the dissertation are satisfactory and compare very encouragingly with the actual location-based marginal price (LBMP) results posted by the New York Independent System Operator (ISO). Further research opportunities inspired by this dissertation are also elaborated.
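    As a toy illustration of what a locational marginal price is, the sketch below dispatches a two-bus system with a congested tie line as a small LP and approximates each bus's LMP as the extra cost of serving one more MW there. This is not the dissertation's security-constrained unit commitment algorithm, and the offers, loads, and line limit are hypothetical.

      # Toy illustration of locational marginal prices: two buses joined by a
      # congested line, dispatch solved as an LP, and the LMP at each bus
      # approximated as the cost of serving one additional MW there.
      from scipy.optimize import linprog

      def dispatch_cost(load_a, load_b, line_limit=50.0):
          # Variables: [gen_a, gen_b, flow a->b]
          c = [20.0, 50.0, 0.0]                       # $/MWh offers at bus A and B
          A_eq = [[1.0, 0.0, -1.0],                   # bus A balance: gen_a - flow = load_a
                  [0.0, 1.0,  1.0]]                   # bus B balance: gen_b + flow = load_b
          b_eq = [load_a, load_b]
          bounds = [(0.0, 200.0), (0.0, 200.0), (-line_limit, line_limit)]
          res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
          return res.fun

      base = dispatch_cost(40.0, 100.0)
      lmp_a = dispatch_cost(41.0, 100.0) - base       # ~20 $/MWh (cheap unit marginal)
      lmp_b = dispatch_cost(40.0, 101.0) - base       # ~50 $/MWh (tie line congested)
      print(lmp_a, lmp_b)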

  17. Optimizing lighting, thermal performance, and energy production of building facades by using automated blinds and PV cells

    NASA Astrophysics Data System (ADS)

    Alzoubi, Hussain Hendi

    Energy consumption in buildings has recently become a major concern for environmental designers. Within this field, daylighting and solar energy design are attractive strategies for saving energy. This study addresses the integrated, optimal performance of building envelopes. It focuses on the transparent parts of building facades, specifically the windows and their shading devices. It suggests a new automated method of utilizing solar energy while keeping optimal solutions for indoor daylighting. The method utilizes a statistical approach to produce mathematical equations based on physical experimentation. A full-scale mock-up representing an actual office was built. Heat gain and lighting levels were measured empirically and correlated with blind angles. Computational methods were used to estimate the power production from photovoltaic cells. Mathematical formulas were derived from the results of the experiments; these formulas were utilized to construct curves as well as mathematical equations for the purpose of optimization. The mathematical equations resulting from the optimization process were coded using the Java programming language to enable future users to deal with generic locations of buildings within a broader context of various climatic conditions. For the purpose of optimization by automation under different climatic conditions, a blind control system was developed based on the findings of this study. This system calibrates the blind angles instantaneously based upon the sun position, the indoor daylight, and the power production from the photovoltaic cells. The functions of this system guarantee full control of the solar energy projected on buildings' facades for indoor lighting and heat gain. In winter, the system automatically admits solar heat into the space, whereas it rejects heat from the space during the summer season. The study showed that the optimality of building facades' performance is achievable for integrated thermal, energy, and lighting models in buildings. There are blind angles that produce maximum energy from the photovoltaic cells while keeping indoor light within the acceptable limits that prevent undesired heat gain in summer.

  18. WE-EF-207-01: FEATURED PRESENTATION and BEST IN PHYSICS (IMAGING): Task-Driven Imaging for Cone-Beam CT in Interventional Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gang, G; Stayman, J; Ouadah, S

    2015-06-15

    Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction and non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm where tube current was updated analytically followed by a gradient-based optimization of reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. Conclusion: The task-driven imaging framework leverages a knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)

  19. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.

    2016-10-01

    Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, in a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on nonlinear physics processes modeling have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, with extremely effective exploration capabilities in many cases, which are able to outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from specific modeling of real phenomena, and also their novelty in terms of comparison with alternative existing algorithms for optimization. We first review important concepts on optimization problems, search spaces and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for facing hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to review in detail the most important meta-heuristics based on them. A discussion on the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation will be carried out to complete the review of these techniques. We also describe some of the most important application areas, in a broad sense, of meta-heuristics, and describe freely accessible software frameworks which can be used to ease the implementation of these algorithms.
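    As a concrete, compact example of a physics-inspired meta-heuristic, the sketch below implements classical simulated annealing (modeled on the slow cooling of a physical system) on the Rastrigin test function. The newer nonlinear-physics algorithms surveyed in the paper follow the same "model a physical process as a search procedure" idea but are not reproduced here; step size, cooling rate, and iteration count are arbitrary choices.

      # Minimal simulated-annealing sketch: accept improving moves always and
      # worsening moves with a Boltzmann probability that shrinks as the
      # temperature is lowered.
      import math
      import random

      def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=20000):
          x, fx = x0, f(x0)
          best, fbest = x, fx
          t = t0
          for _ in range(iters):
              cand = [xi + random.uniform(-step, step) for xi in x]
              fc = f(cand)
              if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                  x, fx = cand, fc
                  if fx < fbest:
                      best, fbest = x, fx
              t *= cooling
          return best, fbest

      rastrigin = lambda x: 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)
      print(simulated_annealing(rastrigin, [4.0, -3.0]))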

  20. Concentrating phenolic acids from Lonicera japonica by nanofiltration technology

    NASA Astrophysics Data System (ADS)

    Li, Cunyu; Ma, Yun; Li, Hongyang; Peng, Guoping

    2017-03-01

    Response surface methodology was used to optimize the concentration process of phenolic acids from Lonicera japonica by the nanofiltration technique. On the basis of the influences of pressure, temperature, and circulating volume, the retention rates of neochlorogenic acid, chlorogenic acid, and 4-dicaffeoylquinic acid were selected as indices, and the molecular weight cut-off of the nanofiltration membrane, the solute concentration, and pH were selected as influencing factors during the concentration process. The experimental mathematical model was arranged according to a Box-Behnken central composite experimental design. The optimal concentration conditions were as follows: nanofiltration molecular weight cut-off, 150 Da; solute concentration, 18.34 µg/mL; pH, 4.26. The predicted value of the retention rate was 97.99% under the optimum conditions, and the experimental value was 98.03±0.24%, which was in accordance with the predicted value. These results demonstrate that the combination of Box-Behnken design and response surface analysis can effectively optimize the nanofiltration concentration of the Lonicera japonica water extract, and they provide a basis for nanofiltration concentration of heat-sensitive traditional Chinese medicines.

  1. MONSS: A multi-objective nonlinear simplex search approach

    NASA Astrophysics Data System (ADS)

    Zapotecas-Martínez, Saúl; Coello Coello, Carlos A.

    2016-01-01

    This article presents a novel methodology for dealing with continuous box-constrained multi-objective optimization problems (MOPs). The proposed algorithm adopts a nonlinear simplex search scheme in order to obtain multiple elements of the Pareto optimal set. The search is directed by a well-distributed set of weight vectors, each of which defines a scalarization problem that is solved by deforming a simplex according to the movements described by Nelder and Mead's method. Considering an MOP with n decision variables, the simplex is constructed using n+1 solutions which minimize different scalarization problems defined by n+1 neighbor weight vectors. All solutions found in the search are used to update a set of solutions considered to be the minima for each separate problem. In this way, the proposed algorithm collectively obtains multiple trade-offs among the different conflicting objectives, while maintaining a proper representation of the Pareto optimal front. In this article, it is shown that a well-designed strategy using just mathematical programming techniques can be competitive with respect to the state-of-the-art multi-objective evolutionary algorithms against which it was compared.
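    The core mechanism, solving many scalarized problems with a simplex-based direct search and collecting the trade-off points, can be sketched in a few lines. The example below uses SciPy's Nelder-Mead on weighted Tchebycheff scalarizations of the classic Schaffer bi-objective problem; it illustrates the idea only and is not the MONSS algorithm, whose simplex construction and weight-neighborhood bookkeeping are more elaborate.

      # Sketch: one Nelder-Mead run per weight vector, each minimizing a
      # weighted Tchebycheff scalarization; the minimizers trace the front.
      import numpy as np
      from scipy.optimize import minimize

      # Bi-objective test problem (Schaffer): f1 = x^2, f2 = (x - 2)^2
      def objectives(x):
          return np.array([x[0] ** 2, (x[0] - 2.0) ** 2])

      z_star = np.array([0.0, 0.0])          # ideal point for this problem

      def tchebycheff(x, w):
          return np.max(w * np.abs(objectives(x) - z_star))

      front = []
      for w1 in np.linspace(0.05, 0.95, 10):
          w = np.array([w1, 1.0 - w1])
          res = minimize(tchebycheff, x0=np.array([1.0]), args=(w,), method="Nelder-Mead")
          front.append(objectives(res.x))

      for f1, f2 in front:
          print(f"f1 = {f1:6.3f}   f2 = {f2:6.3f}")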

  2. Application of Semi Active Control Techniques to the Damping Suppression Problem of Solar Sail Booms

    NASA Technical Reports Server (NTRS)

    Adetona, O.; Keel, L. H.; Whorton, M. S.

    2007-01-01

    Solar sails provide a propellant-free form of space propulsion. These are large flat surfaces that generate thrust when they are impacted by light. When attached to a space vehicle, the thrust generated can propel the space vehicle to great distances at significant speeds. For optimal performance the sail must be kept from excessive vibration. Active control techniques can provide the best performance. However, they require an external power source that may add significant parasitic mass to the solar sail, whereas solar sails require low mass for optimal performance. Secondly, active control techniques typically require a good system model to ensure stability and performance, and the accuracy of solar sail models validated on Earth for a space environment is questionable. An alternative approach is passive vibration suppression; such techniques do not require an external power supply and do not destabilize the system. A third alternative is referred to as semi-active control. This approach tries to get the best of both active and passive control, while avoiding their pitfalls. In semi-active control, an active control law is designed for the system, and passive control techniques are used to implement it. As a result, no external power supply is needed and the system cannot be destabilized. Though it typically underperforms active control techniques, it has been shown to outperform passive control approaches and can be unobtrusively installed on a solar sail boom. Motivated by this, the objective of this research is to study the suitability of a piezoelectric (PZT) patch actuator/sensor based semi-active control system for the vibration suppression problem of solar sail booms. Accordingly, we develop a suitable mathematical and computer model for such studies and demonstrate the capabilities of the proposed approach with computer simulations.

  3. Shape optimization and CAD

    NASA Technical Reports Server (NTRS)

    Rasmussen, John

    1990-01-01

    Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, the interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel to this development, the technology of computer aided design (CAD) has gained a large influence on the design process of mechanical engineering. The CAD technology has already lived through a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered as being only the first generation of a long line of computer-integrated manufacturing (CIM) systems. These systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system could be regarded as simply a database for geometrical information equipped with a number of tools with the purpose of helping the user in the design process. Among these tools are facilities for structural analysis and optimization as well as present standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large number of mathematical and mechanical techniques are available for the solution of single problems. By implementing collections of the available techniques into general software systems, operational environments for structural optimization have been created. The forthcoming years must bring solutions to the problem of integrating such systems into more general design environments. The result of this work should be CAD systems for rational design in which structural optimization is one important design tool among many others.

  4. Approximate approach for optimization space flights with a low thrust on the basis of sufficient optimality conditions

    NASA Astrophysics Data System (ADS)

    Salmin, Vadim V.

    2017-01-01

    Low-thrust flight mechanics is a new chapter of space flight mechanics that considers the full range of problems of trajectory optimization, motion control laws, and spacecraft design parameters. Tasks associated with taking additional factors into account in mathematical models of spacecraft motion become increasingly important, as do additional restrictions on the possibilities of thrust vector control. The complication of the mathematical models of controlled motion leads to difficulties in solving optimization problems. The author proposes methods of finding approximately optimal controls and evaluating their optimality based on analytical solutions. These methods are based on the principle of extending the class of admissible states and controls and on sufficient conditions for the absolute minimum. Estimation procedures are developed that make it possible to determine how close the found solution is to the optimal one and to indicate ways to improve it. The author describes estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular for the optimization of low-thrust flights between circular non-coplanar orbits, the optimization of the control angle and trajectory of the spacecraft during interorbital flights, and the optimization of low-thrust flights between arbitrary elliptical Earth satellite orbits.

  5. Mathematical Idea Analysis: What Embodied Cognitive Science Can Say about the Human Nature of Mathematics.

    ERIC Educational Resources Information Center

    Nunez, Rafael E.

    This paper gives a brief introduction to a discipline called the cognitive science of mathematics. The theoretical background of the arguments is based on embodied cognition and findings in cognitive linguistics. It discusses Mathematical Idea Analysis, a set of techniques for studying implicit structures in mathematics. Particular attention is…

  6. Autocollimation system for measuring angular deformations with reflector designed by quaternionic method

    NASA Astrophysics Data System (ADS)

    Hoang, Phong V.; Konyakhin, Igor A.

    2017-06-01

    Autocollimators are widely used for angular measurements in instrument-making and the manufacture of elements of optical systems (wedges, prisms, plane-parallel plates) to check their shape parameters (rectilinearity, parallelism and planarity) and retrieve their optical parameters (curvature radii, measure and test their flange focusing). Autocollimator efficiency is due to the high sensitivity of the autocollimation method to minor rotations of the reflecting control element or the controlled surface itself. We consider using quaternions to optimize reflector parameters during autocollimation measurements, as compared to the matrix technique. Mathematical model studies have demonstrated that the orthogonal positioning of the two basic unchanged directions of the tetrahedral reflector of the autocollimator is optimal by the criterion of reducing measurement errors where the axis of actual rotation is in a bisecting position towards them. Computer results of running quaternion models are presented that yielded conditions for diminishing measurement errors provided a priori information is available on the position of the rotation axis. A practical technique is considered for synthesizing the parameters of the tetrahedral reflector that employs the newly retrieved relationships. Following the relationships found between the angles of the tetrahedral reflector and the angles of the parameters of its initial orientation, an applied technique was developed to synthesize the control element for autocollimation measurements in case a priori information is available on the axis of actual rotation during monitoring measurements of shaft or pipeline deformation.

  7. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources, called controls, that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L1. By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L2 norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L2 minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L2 differ drastically from those obtained in the sense of L1.
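    The unconstrained quadratic (L2) formulation reduces to a linear least-squares problem. The sketch below solves a discrete analogue with a synthetic control-to-field operator; it only illustrates the structure of the minimization and does not use the paper's acoustic Green's functions or geometry.

      # Sketch of the unconstrained L2 formulation: if p holds the unwanted
      # field sampled on the protected region and A maps control-source
      # strengths to their field on the same points, the minimum-L2 controls
      # solve  min ||A g + p||^2 + lam ||g||^2.
      import numpy as np

      rng = np.random.default_rng(3)
      n_points, n_controls = 60, 12

      A = rng.normal(size=(n_points, n_controls))    # control-to-field operator (synthetic)
      p = rng.normal(size=n_points)                  # unwanted noise field on the region

      lam = 1e-2                                     # small L2 penalty on source strength
      g = np.linalg.solve(A.T @ A + lam * np.eye(n_controls), -A.T @ p)

      residual = A @ g + p
      print(np.linalg.norm(p), np.linalg.norm(residual), np.linalg.norm(g))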

  8. Multidisciplinary design optimization - An emerging new engineering discipline

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1993-01-01

    A definition of multidisciplinary design optimization (MDO) is introduced, and the functionality and relationships of the MDO conceptual components are examined. The latter include design-oriented analysis, approximation concepts, mathematical system modeling, design space search, an optimization procedure, and a human interface.

  9. Mathematical modelling in developmental biology.

    PubMed

    Vasieva, Olga; Rasolonjanahary, Manan'Iarivo; Vasiev, Bakhtier

    2013-06-01

    In recent decades, molecular and cellular biology has benefited from numerous fascinating developments in experimental technique, generating an overwhelming amount of data on various biological objects and processes. This, in turn, has led biologists to look for appropriate tools to facilitate systematic analysis of data. Thus, the need for mathematical techniques, which can be used to aid the classification and understanding of this ever-growing body of experimental data, is more profound now than ever before. Mathematical modelling is becoming increasingly integrated into biological studies in general and into developmental biology particularly. This review outlines some achievements of mathematics as applied to developmental biology and demonstrates the mathematical formulation of basic principles driving morphogenesis. We begin by describing a mathematical formalism used to analyse the formation and scaling of morphogen gradients. Then we address a problem of interplay between the dynamics of morphogen gradients and movement of cells, referring to mathematical models of gastrulation in the chick embryo. In the last section, we give an overview of various mathematical models used in the study of the developmental cycle of Dictyostelium discoideum, which is probably the best example of successful mathematical modelling in developmental biology.

  10. A Generalization of the Karush-Kuhn-Tucker Theorem for Approximate Solutions of Mathematical Programming Problems Based on Quadratic Approximation

    NASA Astrophysics Data System (ADS)

    Voloshinov, V. V.

    2018-03-01

    In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
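    In the same spirit, an approximate solution can be screened by measuring how badly it violates the KKT conditions. The sketch below estimates Lagrange multipliers by least squares on the stationarity condition and reports stationarity, feasibility, and complementarity residuals for a tiny inequality-constrained problem; this is a generic delta-optimality check, not the paper's quadratic-approximation construction.

      # Sketch of a delta-optimality check: given an approximate solution x of
      # min f(x) s.t. g(x) <= 0, estimate multipliers and report KKT residuals.
      import numpy as np

      def kkt_residuals(grad_f, grads_g, g_vals):
          G = np.array(grads_g).T                       # columns = constraint gradients
          lam, *_ = np.linalg.lstsq(G, -np.array(grad_f), rcond=None)
          lam = np.clip(lam, 0.0, None)                 # multipliers must be non-negative
          stationarity = np.linalg.norm(np.array(grad_f) + G @ lam)
          feasibility = max(0.0, max(g_vals))
          complementarity = max(abs(l * g) for l, g in zip(lam, g_vals))
          return stationarity, feasibility, complementarity

      # Example: min x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0, approximate solution (0.51, 0.49)
      x = np.array([0.51, 0.49])
      grad_f = 2 * x
      grads_g = [np.array([-1.0, -1.0])]
      g_vals = [1.0 - x.sum()]
      print(kkt_residuals(grad_f, grads_g, g_vals))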

  11. Mathematical models used in segmentation and fractal methods of 2-D ultrasound images

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Moraru, Luminita; Bibicu, Dorin

    2012-11-01

    Mathematical models are widely used in biomedical computing. The extracted data from images using the mathematical techniques are the "pillar" achieving scientific progress in experimental, clinical, biomedical, and behavioural researches. This article deals with the representation of 2-D images and highlights the mathematical support for the segmentation operation and fractal analysis in ultrasound images. A large number of mathematical techniques are suitable to be applied during the image processing stage. The addressed topics cover the edge-based segmentation, more precisely the gradient-based edge detection and active contour model, and the region-based segmentation namely Otsu method. Another interesting mathematical approach consists of analyzing the images using the Box Counting Method (BCM) to compute the fractal dimension. The results of the paper provide explicit samples performed by various combination of methods.
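    Of the methods listed, the box-counting estimate of fractal dimension is especially compact to write down. The sketch below counts occupied boxes of a segmented (binary) image at several scales and fits the log-log slope; the test image is a synthetic filled square, not an ultrasound segmentation.

      # Box-counting fractal dimension for a binary (already segmented) 2-D
      # image: count occupied boxes at several box sizes and fit
      # log(count) against log(1/size).
      import numpy as np

      def box_counting_dimension(binary_image, sizes=(2, 4, 8, 16, 32)):
          counts = []
          for s in sizes:
              h, w = binary_image.shape
              # trim so the image tiles exactly into s-by-s boxes
              trimmed = binary_image[: h - h % s, : w - w % s]
              boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
              occupied = boxes.any(axis=(1, 3)).sum()
              counts.append(occupied)
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return slope

      # Synthetic test: a filled square should give a dimension close to 2
      img = np.zeros((128, 128), dtype=bool)
      img[16:112, 16:112] = True
      print(box_counting_dimension(img))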

  12. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of heating system as the objective for a given life cycle time. For the particularity of HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm can more preferably solve the HSP problem than PSO algorithm. Moreover, the results also present the potential to provide useful information when making decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for HSP problem. PMID:23935429
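    For reference, the sketch below shows the generic (unimproved) particle swarm optimization loop on a simple test function: velocities are pulled toward each particle's personal best and the swarm's global best. The HSP cost model and the paper's IPSO modifications are not reproduced, and the inertia and acceleration coefficients are conventional textbook values.

      # Minimal particle swarm optimization sketch (the generic algorithm,
      # not the paper's improved IPSO variant).
      import numpy as np

      def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=10.0):
          rng = np.random.default_rng(0)
          x = rng.uniform(-bound, bound, (n_particles, dim))
          v = np.zeros((n_particles, dim))
          pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
          g = pbest[pbest_val.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, -bound, bound)
              vals = np.apply_along_axis(f, 1, x)
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              g = pbest[pbest_val.argmin()].copy()
          return g, f(g)

      sphere = lambda z: float(np.sum(z ** 2))
      print(pso(sphere, dim=5))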

  13. Multi-Objective Ant Colony Optimization Based on the Physarum-Inspired Mathematical Model for Bi-Objective Traveling Salesman Problems

    PubMed Central

    Zhang, Zili; Gao, Chao; Lu, Yuxiao; Liu, Yuxin; Liang, Mingxin

    2016-01-01

    The Bi-objective Traveling Salesman Problem (bTSP) is an important field in operations research, and its solutions can be widely applied in the real world. Many Multi-objective Ant Colony Optimization algorithms (MOACOs) have been proposed to solve bTSPs. However, most MOACOs suffer from premature convergence. This paper proposes an optimization strategy for MOACOs by optimizing the initialization of the pheromone matrix with the prior knowledge of a Physarum-inspired Mathematical Model (PMM). PMM can find the shortest route between two nodes based on a positive feedback mechanism. The optimized algorithms, named iPM-MOACOs, can enhance the pheromone on the short paths and promote the search ability of ants. A series of experiments is conducted, and the experimental results show that the proposed strategy can achieve a better compromise solution than the original MOACOs for solving bTSPs. PMID:26751562

  14. Multi-Objective Ant Colony Optimization Based on the Physarum-Inspired Mathematical Model for Bi-Objective Traveling Salesman Problems.

    PubMed

    Zhang, Zili; Gao, Chao; Lu, Yuxiao; Liu, Yuxin; Liang, Mingxin

    2016-01-01

    The Bi-objective Traveling Salesman Problem (bTSP) is an important field in operations research, and its solutions can be widely applied in the real world. Many Multi-objective Ant Colony Optimization algorithms (MOACOs) have been proposed to solve bTSPs. However, most MOACOs suffer from premature convergence. This paper proposes an optimization strategy for MOACOs by optimizing the initialization of the pheromone matrix with the prior knowledge of a Physarum-inspired Mathematical Model (PMM). PMM can find the shortest route between two nodes based on a positive feedback mechanism. The optimized algorithms, named iPM-MOACOs, can enhance the pheromone on the short paths and promote the search ability of ants. A series of experiments is conducted, and the experimental results show that the proposed strategy can achieve a better compromise solution than the original MOACOs for solving bTSPs.
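    To show where a pheromone-matrix initialization would plug in, the sketch below is a minimal single-objective ant colony optimization loop for a small random TSP. The pheromone matrix is initialized uniformly, which is exactly the step the paper replaces with Physarum-derived values; the bi-objective machinery of iPM-MOACO is not reproduced, and all parameter values are conventional placeholders.

      # Minimal single-objective ACO sketch for a small TSP: ants build tours
      # guided by pheromone and inverse distance; pheromone evaporates and is
      # reinforced along good tours.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 12
      coords = rng.random((n, 2))
      dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1) + np.eye(n)

      tau = np.ones((n, n))                      # uniform pheromone initialization
      eta = 1.0 / dist                           # heuristic desirability
      alpha, beta, rho, Q = 1.0, 3.0, 0.5, 1.0

      best_tour, best_len = None, np.inf
      for _ in range(100):                       # iterations
          tours = []
          for _ in range(20):                    # ants
              tour = [rng.integers(n)]
              while len(tour) < n:
                  i = tour[-1]
                  weights = (tau[i] ** alpha) * (eta[i] ** beta)
                  weights[tour] = 0.0            # exclude visited cities
                  tour.append(rng.choice(n, p=weights / weights.sum()))
              length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
              tours.append((length, tour))
              if length < best_len:
                  best_len, best_tour = length, tour
          tau *= 1.0 - rho                       # evaporation
          for length, tour in tours:             # deposit
              for k in range(n):
                  tau[tour[k], tour[(k + 1) % n]] += Q / length
      print(best_len, best_tour)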

  15. Optimal control of raw timber production processes

    Treesearch

    Ivan Kolenka

    1978-01-01

    This paper demonstrates the possibility of optimal planning and control of timber harvesting activ-ities with mathematical optimization models. The separate phases of timber harvesting are represented by coordinated models which can be used to select the optimal decision for the execution of any given phase. The models form a system whose components are connected and...

  16. Inverse transport calculations in optical imaging with subspace optimization algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
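    The low-frequency recovery step can be imitated with a truncated singular value decomposition. The sketch below projects synthetic data onto the leading right singular vectors of a smoothing-like operator and reconstructs the corresponding component of the unknown analytically; the radiative transport forward map and the paper's exact subspace splitting are not reproduced.

      # Sketch of the subspace idea: recover the component of the unknown that
      # lies in the span of the k leading right singular vectors of the
      # measurement operator; the remainder would be left to iterative
      # minimization.
      import numpy as np

      rng = np.random.default_rng(2)
      n, m, k = 64, 48, 8                         # unknowns, measurements, subspace size

      A = rng.normal(size=(m, n)) @ np.diag(1.0 / (1.0 + np.arange(n)))  # smoothing-like operator
      x_true = np.sin(np.linspace(0, 3 * np.pi, n)) + 0.1 * rng.normal(size=n)
      y = A @ x_true + 1e-3 * rng.normal(size=m)

      # Low-frequency component: analytic recovery in the leading subspace
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      coeff = (U[:, :k].T @ y) / s[:k]
      x_low = Vt[:k].T @ coeff

      # Report how much of the solution the subspace already explains.
      print(np.linalg.norm(x_true - x_low) / np.linalg.norm(x_true))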

  17. Uncertainty management by relaxation of conflicting constraints in production process scheduling

    NASA Technical Reports Server (NTRS)

    Dorn, Juergen; Slany, Wolfgang; Stary, Christian

    1992-01-01

    Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.

  18. Biofouling in reverse osmosis: phenomena, monitoring, controlling and remediation

    NASA Astrophysics Data System (ADS)

    Maddah, Hisham; Chogle, Aman

    2017-10-01

    This paper is a comprehensive review of biofouling in reverse osmosis modules where we have discussed the mechanism of biofouling. Water crisis is an issue of pandemic concern because of the steady rise in demand of drinking water. Overcoming biofouling is vital since we need to optimize expenses and quality of potable water production. Various kinds of microorganisms responsible for biofouling have been identified to develop better understanding of their attacking behavior enabling us to encounter the problem. Both primitive and advanced detection techniques have been studied for the monitoring of biofilm development on reverse osmosis membranes. Biofouling has a negative impact on membrane life as well as permeate flux and quality. Thus, a mathematical model has been presented for the calculation of normalized permeate flux for evaluating the extent of biofouling. It is concluded that biofouling can be controlled by the application of several physical and chemical remediation techniques.

  19. Barrier-breaking performance for industrial problems on the CRAY C916

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graffunder, S.K.

    1993-12-31

    Nine applications, including third-party codes, were submitted to the Gordon Bell Prize committee showing the CRAY C916 supercomputer providing record-breaking time to solution for industrial problems in several disciplines. Performance was obtained by balancing raw hardware speed; effective use of large, real, shared memory; compiler vectorization and autotasking; hand optimization; asynchronous I/O techniques; and new algorithms. The highest GFLOPS performance for the submissions was 11.1 GFLOPS out of a peak advertised performance of 16 GFLOPS for the CRAY C916 system. One program achieved a 15.45-fold speedup from the compiler with just two hand-inserted directives to scope variables properly for the mathematical library. New I/O techniques hide tens of gigabytes of I/O behind parallel computations. Finally, new iterative solver algorithms have demonstrated times to solution on 1 CPU as much as 70 times faster than the best direct solvers.

  20. Application of nomographs for analysis and prediction of receiver spurious response EMI

    NASA Astrophysics Data System (ADS)

    Heather, F. W.

    1985-07-01

    Spurious response EMI for the front end of a superheterodyne receiver follows a simple mathematical formula; however, the application of the formula to predict test frequencies produces more data than can be evaluated. An analysis technique has been developed to graphically depict all receiver spurious responses using a nomograph and to permit selection of optimum test frequencies. The discussion includes the math model used to simulate a superheterodyne receiver, the implementation of the model in the computer program, the approach to test frequency selection, interpretation of the nomographs, analysis and prediction of receiver spurious response EMI from the nomographs, and application of the nomographs. In addition, figures are provided of sample applications. This EMI analysis and prediction technique greatly improves the Electromagnetic Compatibility (EMC) test engineer's ability to visualize the scope of receiver spurious response EMI testing and optimize test frequency selection.
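    The arithmetic behind such predictions can be sketched directly, assuming the standard mixer-product relation |m·f_RF - n·f_LO| = f_IF (the paper's exact formula and receiver parameters are not reproduced, so the LO, IF, band, and order limit below are hypothetical):

      # Enumerate candidate spurious-response frequencies from the standard
      # mixer-product relation |m*f_RF - n*f_LO| = f_IF, which gives
      # f_RF = (n*f_LO +/- f_IF) / m for each order pair (m, n).
      def spurious_responses(f_lo, f_if, rf_band, max_order=5):
          lo_band, hi_band = rf_band
          spurs = []
          for m in range(1, max_order + 1):          # RF harmonic order
              for n in range(1, max_order + 1):      # LO harmonic order
                  for sign in (+1, -1):
                      f_rf = (n * f_lo + sign * f_if) / m
                      if lo_band <= f_rf <= hi_band:
                          spurs.append((round(f_rf, 3), m, n))
          # the (m, n) = (1, 1) entries are the desired channel and its image;
          # all other entries are spurious responses
          return sorted(set(spurs))

      # Hypothetical receiver: LO at 1070 MHz, IF at 70 MHz, RF band 900-1100 MHz
      for f_rf, m, n in spurious_responses(1070.0, 70.0, (900.0, 1100.0)):
          print(f"response at {f_rf:8.3f} MHz  (RF order m={m}, LO order n={n})")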

  1. Collaborative Emission Reduction Model Based on Multi-Objective Optimization for Greenhouse Gases and Air Pollutants.

    PubMed

    Meng, Qing-chun; Rong, Xiao-xia; Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi

    2016-01-01

    CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world as regards preserving the environmental ecology. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and air pollutants such as SO2 and NOX, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative emission reduction of three kinds of gases on the basis of their common restraints in different ways of energy consumption to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, the collaborative emission reduction for three kinds of gases, the multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study, which include the Granger causality test to analyze the causality between air quality and pollutant emission, a function analysis to determine the quantitative relation between energy consumption and pollutant emission, a multi-objective optimization to set up the collaborative optimization model that considers energy consumption, and an optimality condition analysis for the multi-objective optimization model to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, the data of pollutant emission and final consumption of energies of Tianjin in 1996-2012 was employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are recommended and the drawn conclusions are stated.

  2. Collaborative Emission Reduction Model Based on Multi-Objective Optimization for Greenhouse Gases and Air Pollutants

    PubMed Central

    Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi

    2016-01-01

    CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world as regards preserving the environmental ecology. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and air pollutants such as SO2 and NOX, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative emission reduction of three kinds of gases on the basis of their common restraints in different ways of energy consumption to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, the collaborative emission reduction for three kinds of gases, the multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study, which include the Granger causality test to analyze the causality between air quality and pollutant emission, a function analysis to determine the quantitative relation between energy consumption and pollutant emission, a multi-objective optimization to set up the collaborative optimization model that considers energy consumption, and an optimality condition analysis for the multi-objective optimization model to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, the data of pollutant emission and final consumption of energies of Tianjin in 1996–2012 was employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are recommended and the drawn conclusions are stated. PMID:27010658

  3. CSP - The 19th European Conference on Mathematics for Industry (ECMI 2016)

    DTIC Science & Technology

    2017-03-02

    Report documentation fragment (only partial text is recoverable): the conference addressed topics such as quality physics in game cinematics, and ECMI 2016 exceeded by far the expectations of the organizers. Subject terms: industrial mathematics; numerical simulation; optimization; modelling; innovation.

  4. Using the Gurobi Solvers on the Peregrine System | High-Performance

    Science.gov Websites

    Gurobi Optimizer is a suite of solvers for mathematical programming, licensed for use on the Peregrine system. The page's recoverable fragments cover calling Gurobi from MATLAB (adding the directory given by GRB_MATLAB_PATH to the MATLAB path) and using Gurobi with GAMS, a high-level modeling system for mathematical programming.

  5. Optimization of inclusive fitness.

    PubMed

    Grafen, Alan

    2006-02-07

    The first fully explicit argument is given that broadly supports a widespread belief among whole-organism biologists that natural selection tends to lead to organisms acting as if maximizing their inclusive fitness. The use of optimization programs permits a clear statement of what this belief should be understood to mean, in contradistinction to the common mathematical presumption that it should be formalized as some kind of Lyapunov or even potential function. The argument reveals new details and uncovers latent assumptions. A very general genetic architecture is allowed, and there is arbitrary uncertainty. However, frequency dependence of fitnesses is not permitted. The logic of inclusive fitness immediately draws together various kinds of intra-genomic conflict, and the concept of 'p-family' is introduced. Inclusive fitness is thus incorporated into the formal Darwinism project, which aims to link the mathematics of motion (difference and differential equations) used to describe gene frequency trajectories with the mathematics of optimization used to describe purpose and design. Important questions remain to be answered in the fundamental theory of inclusive fitness.

  6. Conceptual and Procedural Approaches to Mathematics in the Engineering Curriculum--Comparing Views of Junior and Senior Engineering Students in Two Countries

    ERIC Educational Resources Information Center

    Bergsten, Christer; Engelbrecht, Johann; Kågesten, Owe

    2017-01-01

    One challenge for an optimal design of the mathematical components in engineering education curricula is to understand how the procedural and conceptual dimensions of mathematical work can be matched with different demands and contexts from the education and practice of engineers. The focus in this paper is on how engineering students respond to…

  7. Gesellschaft fuer angewandte Mathematik und Mechanik, Scientific Annual Meeting, Universitaet Stuttgart, Federal Republic of Germany, Apr. 13-17, 1987, Reports

    NASA Astrophysics Data System (ADS)

    Recent advances in the analytical and numerical treatment of physical and engineering problems are discussed in reviews and reports. Topics addressed include fluid mechanics, numerical methods for differential equations, FEM approaches, and boundary-element methods. Consideration is given to optimization, decision theory, stochastics, actuarial mathematics, applied mathematics and mathematical physics, and numerical analysis.

  8. 39 CFR 3050.1 - Definitions applicable to this part.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., mathematical, or statistical theory, precept, or assumption applied by the Postal Service in producing a... manipulation technique whose validity does not require the acceptance of a particular economic, mathematical, or statistical theory, precept, or assumption. A change in quantification technique should not change...

  9. Experimental Mathematics and Computational Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.; Borwein, Jonathan M.

    2009-04-30

    The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include applications of experimental mathematics in statistics as well as statistical methods applied to computational mathematics.

  10. Automatic Semantic Generation and Arabic Translation of Mathematical Expressions on the Web

    ERIC Educational Resources Information Center

    Doush, Iyad Abu; Al-Bdarneh, Sondos

    2013-01-01

    Automatic processing of mathematical information on the web imposes some difficulties. This paper presents a novel technique for automatic generation of mathematical equations semantic and Arabic translation on the web. The proposed system facilitates unambiguous representation of mathematical equations by correlating equations to their known…

  11. Goddard trajectory determination subsystem: Mathematical specifications

    NASA Technical Reports Server (NTRS)

    Wagner, W. E. (Editor); Velez, C. E. (Editor)

    1972-01-01

    The mathematical specifications of the Goddard trajectory determination subsystem of the flight dynamics system are presented. These specifications include the mathematical description of the coordinate systems, dynamic and measurement model, numerical integration techniques, and statistical estimation concepts.

  12. DESIGN AND OPTIMIZATION OF A REFRIGERATION SYSTEM

    EPA Science Inventory

    The paper discusses the design and optimization of a refrigeration system, using a mathematical model of a refrigeration system modified to allow its use with the optimization program. The model was developed using only algebraic equations so that it could be used with the optimiz...

  13. A nonlinear bi-level programming approach for product portfolio management.

    PubMed

    Ma, Shuang

    2016-01-01

    Product portfolio management (PPM) is a critical decision-making for companies across various industries in today's competitive environment. Traditional studies on PPM problem have been motivated toward engineering feasibilities and marketing which relatively pay less attention to other competitors' actions and the competitive relations, especially in mathematical optimization domain. The key challenge lies in that how to construct a mathematical optimization model to describe this Stackelberg game-based leader-follower PPM problem and the competitive relations between them. The primary work of this paper is the representation of a decision framework and the optimization model to leverage the PPM problem of leader and follower. A nonlinear, integer bi-level programming model is developed based on the decision framework. Furthermore, a bi-level nested genetic algorithm is put forward to solve this nonlinear bi-level programming model for leader-follower PPM problem. A case study of notebook computer product portfolio optimization is reported. Results and analyses reveal that the leader-follower bi-level optimization model is robust and can empower product portfolio optimization.

  14. Quantitative structure-retention relationships applied to development of liquid chromatography gradient-elution method for the separation of sartans.

    PubMed

    Golubović, Jelena; Protić, Ana; Otašević, Biljana; Zečević, Mira

    2016-04-01

    QSRR are mathematically derived relationships between the chromatographic parameters determined for a representative series of analytes in given separation systems and the molecular descriptors accounting for the structural differences among the investigated analytes. An artificial neural network (ANN) is a data-analysis technique that sets out to emulate the human brain's way of working. The aim of the present work was to optimize the separation of six angiotensin receptor antagonists, so-called sartans (losartan, valsartan, irbesartan, telmisartan, candesartan cilexetil and eprosartan), in a gradient-elution HPLC method. For this purpose, an ANN was used as a mathematical tool for establishing a QSRR model based on molecular descriptors of the sartans and varied instrumental conditions. The optimized model can be further used for prediction of an external congener of the sartans and for analysis of the influence of the analyte structure, represented through molecular descriptors, on retention behaviour. Molecular descriptors included in the modelling were electrostatic, geometrical and quantum-chemical descriptors: Connolly solvent-excluded volume, non-1,4 van der Waals energy, octanol/water distribution coefficient, polarizability, number of proton-donor sites and number of proton-acceptor sites. The varied instrumental conditions were gradient time, buffer pH and buffer molarity. The high prediction ability of the optimized network enabled complete separation of the analytes within a run time of 15.5 min under the following conditions: gradient time of 12.5 min, buffer pH of 3.95 and buffer molarity of 25 mM. The applied methodology showed the potential to predict the retention behaviour of an external analyte with properties within the training space. Connolly solvent-excluded volume, polarizability and number of proton-acceptor sites appeared to be the most influential parameters for the retention behaviour of the sartans. Copyright © 2015 Elsevier B.V. All rights reserved.
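
    A minimal sketch of how an ANN-based QSRR model of this kind can be assembled is shown below, assuming synthetic descriptor values and retention times rather than the study's sartan data; the descriptor list, network size, and candidate conditions are placeholders.

```python
# Illustrative QSRR sketch: a small feed-forward network mapping molecular
# descriptors plus instrumental conditions to retention time. Descriptor
# values and retention times below are synthetic placeholders, not the
# sartan data from the study.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Columns: Connolly volume, polarizability, logD, H-bond acceptors,
# gradient time (min), buffer pH, buffer molarity (mM).
X = rng.uniform([300, 30, 1, 4, 10, 3, 10],
                [600, 60, 6, 10, 20, 6, 50], size=(60, 7))
t_r = 2.0 + 0.01 * X[:, 0] + 0.05 * X[:, 1] - 0.3 * X[:, 5] + rng.normal(0, 0.2, 60)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), t_r)

# Predict retention of a new analyte under candidate gradient conditions.
candidate = np.array([[450, 45, 3.5, 6, 12.5, 3.95, 25]])
print("predicted retention time (min):", model.predict(scaler.transform(candidate))[0])
```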

  15. Research on Mathematical Techniques in Psychology. Final Report.

    ERIC Educational Resources Information Center

    Gulliksen, Harold

    Mathematical techniques are developed for studying psychological problems in three fields: (1) psychological scaling, (2) learning and concept formation, and (3) mental measurement. Psychological scaling procedures are demonstrated to be useful in many areas, ranging from sensory discrimination of physical stimuli, such as colors, sounds, etc.,…

  16. Variable-Complexity Multidisciplinary Optimization on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.

    1998-01-01

    This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant included: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT, (2) use of parallel multipoint approximation methods for structural optimization of the HSCT, and (3) mathematical and algorithmic development, including support in the integration of parallel computation for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of a complex aircraft configuration.

  17. An Eddy Current Testing Platform System for Pipe Defect Inspection Based on an Optimized Eddy Current Technique Probe Design

    PubMed Central

    Rifai, Damhuji; Abdalla, Ahmed N.; Razali, Ramdan; Ali, Kharudin; Faraj, Moneer A.

    2017-01-01

    The use of the eddy current technique (ECT) for the non-destructive testing of conducting materials has become increasingly important in the past few years. Non-destructive ECT plays a key role in ensuring the safety and integrity of large industrial structures such as oil and gas pipelines. This paper introduces a novel ECT probe design integrated with a distributed ECT inspection system (DSECT) used for crack inspection of inner ferromagnetic pipes. The system consists of an array of giant magneto-resistive (GMR) sensors, a pneumatic system, a rotating magnetic field excitation source and a host PC acting as the data analysis center. The probe design parameters, namely the probe diameter, the excitation coil and the number of GMR sensors in the sensor array, are optimized using numerical optimization based on the desirability approach. The main benefits of DSECT can be seen in terms of its modularity and flexibility for the use of different types of magnetic transducers/sensors and of signals of a different nature with either digital or analog outputs, making it well suited for an ECT probe design using an array of GMR magnetic sensors. A real-time application of the DSECT distributed system can be exploited for the inspection of a 70 mm carbon steel pipe. In order to predict axial and circumferential defect detection, a mathematical model is developed based on the technique known as response surface methodology (RSM). The inspection results for a carbon steel pipe sample with artificial defects indicate that the system design is highly efficient. PMID:28335399
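
    The response-surface step can be illustrated with a small sketch: fit a second-order polynomial in two coded design factors and locate the best predicted setting on a grid. The factor names, data, and grid below are invented placeholders, not the DSECT measurements.

```python
# Response-surface sketch: fit a second-order model of a response (e.g. a
# defect-signal amplitude) in two design factors (e.g. probe diameter and
# number of GMR sensors) and locate the best predicted setting on a grid.
# The data are synthetic placeholders, not the DSECT measurements.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 30)      # coded probe diameter
x2 = rng.uniform(-1, 1, 30)      # coded sensor count
y = 5 + 1.2 * x1 - 0.8 * x2 - 1.5 * x1**2 - 1.0 * x2**2 + 0.4 * x1 * x2 \
    + rng.normal(0, 0.1, 30)

# Design matrix for the full quadratic model.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the fitted surface on a grid and report the maximizer.
g1, g2 = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                     g1.ravel()**2, g2.ravel()**2, g1.ravel() * g2.ravel()])
pred = G @ beta
best = np.argmax(pred)
print("best coded setting:", g1.ravel()[best], g2.ravel()[best])
print("predicted response:", pred[best])
```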

  18. Stochastic Robust Mathematical Programming Model for Power System Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  19. CFD studies on biomass thermochemical conversion.

    PubMed

    Wang, Yiqun; Yan, Lifeng

    2008-06-01

    Thermochemical conversion of biomass offers an efficient and economical process to provide gaseous, liquid and solid fuels and to prepare chemicals derived from biomass. Computational fluid dynamics (CFD) modeling applications on biomass thermochemical processes help to optimize the design and operation of thermochemical reactors. Recent progress in numerical techniques and computing capability has advanced CFD as a widely used approach to provide efficient design solutions in industry. This paper introduces the fundamentals involved in developing a CFD solution. Mathematical equations governing the fluid flow, heat and mass transfer and chemical reactions in thermochemical systems are described, and sub-models for individual processes are presented. It provides a review of various applications of CFD in the biomass thermochemical process field.

  20. Shuttle cryogenic supply system. Optimization study. Volume 5 B-1: Programmers manual for math models

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A computer program for rapid parametric evaluation of various types of cryogenics spacecraft systems is presented. The mathematical techniques of the program provide the capability for in-depth analysis combined with rapid problem solution for the production of a large quantity of soundly based trade-study data. The program requires a large data bank capable of providing characteristics performance data for a wide variety of component assemblies used in cryogenic systems. The program data requirements are divided into: (1) the semipermanent data tables and source data for performance characteristics and (2) the variable input data which contains input parameters which may be perturbated for parametric system studies.

  1. Faithful test of nonlocal realism with entangled coherent states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Chang-Woo; Jeong, Hyunseok; Paternostro, Mauro

    2011-02-15

    We investigate the violation of Leggett's inequality for nonlocal realism using entangled coherent states and various types of local measurements. We prove mathematically the relation between the violation of the Clauser-Horne-Shimony-Holt form of Bell's inequality and Leggett's one when tested by the same resources. For Leggett inequalities, we generalize the nonlocal realistic bound to systems in Hilbert spaces larger than bidimensional ones and introduce an optimization technique that allows one to achieve larger degrees of violation by adjusting the local measurement settings. Our work describes the steps that should be performed to produce a self-consistent generalization of Leggett's original arguments to continuous-variable states.

  2. CFD Studies on Biomass Thermochemical Conversion

    PubMed Central

    Wang, Yiqun; Yan, Lifeng

    2008-01-01

    Thermochemical conversion of biomass offers an efficient and economical process to provide gaseous, liquid and solid fuels and to prepare chemicals derived from biomass. Computational fluid dynamics (CFD) modeling applications on biomass thermochemical processes help to optimize the design and operation of thermochemical reactors. Recent progress in numerical techniques and computing capability has advanced CFD as a widely used approach to provide efficient design solutions in industry. This paper introduces the fundamentals involved in developing a CFD solution. Mathematical equations governing the fluid flow, heat and mass transfer and chemical reactions in thermochemical systems are described, and sub-models for individual processes are presented. It provides a review of various applications of CFD in the biomass thermochemical process field. PMID:19325848

  3. Optimisation by hierarchical search

    NASA Astrophysics Data System (ADS)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.

  4. Preliminary design procedure for insulated structures subjected to transient heating

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.

    1979-01-01

    Minimum-mass designs were obtained for insulated structural panels loaded by a general set of in-plane forces and a time-dependent temperature. Temperature and stress histories in the structure are given by closed-form solutions, and optimization of the insulation and structural thicknesses is performed by nonlinear mathematical programming techniques. Design calculations are described to evaluate the structural efficiency of eight materials under combined heating and mechanical loads: graphite/polyimide, graphite/epoxy, boron/aluminum, titanium, aluminum, Rene 41, carbon/carbon, and Lockalloy. The effects of the intensity and duration of heating on design mass were assessed. Results indicate that an optimum structure may have a temperature response well below the recommended allowable temperature for the material.

  5. Mathematical defense method of networked servers with controlled remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2006-05-01

    The networked server defense model is focused on reliability and availability in security respects. The (remote) backup servers are hooked up by VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", and the system then deals with an unreliable main machine, a spare machine, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for back-ups; the information on the system is naturally delayed. An analog of the N-policy is used to restrict the usage of auxiliary machines to some reasonable quantity. The results are demonstrated in the network architecture by using stochastic optimization techniques.

  6. Optimal control of HIV/AIDS dynamic: Education and treatment

    NASA Astrophysics Data System (ADS)

    Sule, Amiru; Abdullah, Farah Aini

    2014-07-01

    A mathematical model which describes the transmission dynamics of HIV/AIDS is developed. The optimal control representing education and treatment for this model is explored. The existence of an optimal control is established analytically by the use of optimal control theory. Numerical simulations suggest that education and treatment for the infected have a positive impact on HIV/AIDS control.

  7. Numerical Solution of the Electron Heat Transport Equation and Physics-Constrained Modeling of the Thermal Conductivity via Sequential Quadratic Programming Optimization in Nuclear Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Paloma, Cynthia S.

    The plasma electron temperature (Te) plays a critical role in a tokamak nuclear fusion reactor since temperatures on the order of 10^8 K are required to achieve fusion conditions. Many plasma properties in a tokamak nuclear fusion reactor are modeled by partial differential equations (PDEs) because they depend not only on time but also on space. In particular, the dynamics of the electron temperature is governed by a PDE referred to as the Electron Heat Transport Equation (EHTE). In this work, a numerical method is developed to solve the EHTE based on a custom finite-difference technique. The solution of the EHTE is compared to temperature profiles obtained by using TRANSP, a sophisticated plasma transport code, for specific discharges from the DIII-D tokamak, located at the DIII-D National Fusion Facility in San Diego, CA. The thermal conductivity (also called thermal diffusivity) of the electrons (Xe) is a plasma parameter that plays a critical role in the EHTE since it indicates how the electron temperature diffusion varies across the minor effective radius of the tokamak. TRANSP approximates Xe through a curve-fitting technique to match experimentally measured electron temperature profiles. While complex physics-based models have been proposed for Xe, there is a lack of a simple mathematical model for the thermal diffusivity that could be used for control design. In this work, a model for Xe is proposed based on a scaling law involving key plasma variables such as the electron temperature (Te), the electron density (ne), and the safety factor (q). An optimization algorithm is developed based on the Sequential Quadratic Programming (SQP) technique to optimize the scaling factors appearing in the proposed model so that the predicted electron temperature and magnetic flux profiles match predefined target profiles in the best possible way. A simulation study summarizing the outcomes of the optimization procedure is presented to illustrate the potential of the proposed modeling method.
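
    A minimal sketch of the optimization step follows, assuming a simple power-law form Xe = c0 * Te^a * ne^b * q^c and synthetic target values; SciPy's SLSQP routine stands in for the custom SQP implementation described above.

```python
# Sketch of fitting scaling-law exponents with an SQP-type optimizer
# (scipy's SLSQP). The "target" diffusivity profile and the bounds on the
# exponents are illustrative stand-ins, not DIII-D/TRANSP data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
Te = rng.uniform(0.5, 5.0, 40)     # keV
ne = rng.uniform(1.0, 8.0, 40)     # 10^19 m^-3
q = rng.uniform(1.0, 4.0, 40)      # safety factor

true = (0.8, 1.5, -0.5, 1.0)       # c0, a, b, c used to fabricate targets
chi_target = true[0] * Te**true[1] * ne**true[2] * q**true[3]

def objective(p):
    c0, a, b, c = p
    chi = c0 * Te**a * ne**b * q**c
    return np.sum((chi - chi_target) ** 2)

res = minimize(objective, x0=[1.0, 1.0, 0.0, 0.5], method="SLSQP",
               bounds=[(0.1, 10), (-3, 3), (-3, 3), (-3, 3)])
print("fitted (c0, a, b, c):", res.x)
```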

  8. Mathematical Techniques for Nonlinear System Theory.

    DTIC Science & Technology

    1979-05-01

    Scanned report cover fragment (AD-A078715, Florida Univ Gainesville Center for Mathematical System Theory, May 1979; sponsored by the Air Force Office of Scientific Research): during the past year, the major effort under this grant was work by the Principal Investigator (R. E. Kalman) and by E. Emre.

  9. Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldman's cancer model.

    PubMed

    Chen, Jeng-Huei; Kuo, Ya-Hui; Luh, Hsing Paul

    2013-10-01

    Mathematical models can be used to study the chemotherapy on tumor cells. Especially, in 1979, Goldie and Coldman proposed the first mathematical model to relate the drug sensitivity of tumors to their mutation rates. Many scientists have since referred to this pioneering work because of its simplicity and elegance. Its original idea has also been extended and further investigated in massive follow-up studies of cancer modeling and optimal treatment. Goldie and Coldman, together with Guaduskas, later used their model to explain why an alternating non-cross-resistant chemotherapy is optimal with a simulation approach. Subsequently in 1983, Goldie and Coldman proposed an extended stochastic based model and provided a rigorous mathematical proof to their earlier simulation work when the extended model is approximated by its quasi-approximation. However, Goldie and Coldman's analytic study of optimal treatments majorly focused on a process with symmetrical parameter settings, and presented few theoretical results for asymmetrical settings. In this paper, we recast and restate Goldie, Coldman, and Guaduskas' model as a multi-stage optimization problem. Under an asymmetrical assumption, the conditions under which a treatment policy can be optimal are derived. The proposed framework enables us to consider some optimal policies on the model analytically. In addition, Goldie, Coldman and Guaduskas' work with symmetrical settings can be treated as a special case of our framework. Based on the derived conditions, this study provides an alternative proof to Goldie and Coldman's work. In addition to the theoretical derivation, numerical results are included to justify the correctness of our work. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. An Analysis of Final Course Grades in Two Different Entry Level Mathematics Courses between and among First Year College Students with Different Levels of High School Mathematics Preparation

    ERIC Educational Resources Information Center

    Muir, Carrie

    2012-01-01

    The purpose of this study was to compare the performance of first year college students with similar high school mathematics backgrounds in two introductory level college mathematics courses, "Fundamentals and Techniques of College Algebra and Quantitative Reasoning and Mathematical Skills," and to compare the performance of students…

  11. McDonald's vs Father Christmas

    ERIC Educational Resources Information Center

    Pratt, Dave; Simpson, Amanda

    2004-01-01

    Mathematics in textbooks and indeed in conventional classrooms is often presented as exercises or worksheets in which the mathematics itself has been processed into a form that is easily digested. This McDonald's version of mathematics ensures that the mathematical skill or technique is laid bare and typically the sole focus of attention. In this…

  12. Provocative Mathematics Questions: Drawing Attention to a Lack of Attention

    ERIC Educational Resources Information Center

    Klymchuk, Sergiy

    2015-01-01

    The article investigates the role of attention in the reflective thinking of school mathematics teachers. It analyses teachers' ability to pay attention to detail and "use" their mathematical knowledge. The vast majority of teachers can be expected to have an excellent knowledge of mathematical techniques. The question examined here is…

  13. Optimizing solar-cell grid geometry

    NASA Technical Reports Server (NTRS)

    Crossley, A. P.

    1969-01-01

    Trade-off analysis and mathematical expressions calculate optimum grid geometry in terms of various cell parameters. Determination of the grid geometry provides proper balance between grid resistance and cell output to optimize the energy conversion process.
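
    The trade-off can be illustrated with a stylized one-variable model in which emitter resistive loss grows with finger spacing while shading loss falls with it; the loss expressions and coefficients below are illustrative assumptions, not the article's expressions.

```python
# Stylized grid-geometry trade-off: emitter resistive loss grows with finger
# spacing while shading loss falls with it, so the total fractional power
# loss has an interior minimum. Coefficients are illustrative, not a real
# cell design.
from scipy.optimize import minimize_scalar

sheet_resistance = 100.0   # ohm/sq (illustrative)
j_mp = 0.03                # A/cm^2 at max power
v_mp = 0.5                 # V at max power
finger_width = 0.01        # cm

def fractional_loss(s):
    """Total fractional loss for finger spacing s (cm)."""
    resistive = sheet_resistance * j_mp * s**2 / (12.0 * v_mp)  # emitter I^2 R loss
    shading = finger_width / s                                  # covered area fraction
    return resistive + shading

res = minimize_scalar(fractional_loss, bounds=(0.02, 1.0), method="bounded")
print(f"optimum finger spacing: {res.x:.3f} cm, loss fraction: {res.fun:.3f}")
```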

  14. Preferences of Teaching Methods and Techniques in Mathematics with Reasons

    ERIC Educational Resources Information Center

    Ünal, Menderes

    2017-01-01

    In this descriptive study, the goal was to determine teachers' preferred pedagogical methods and techniques in mathematics. Qualitative research methods were employed, primarily case studies. 40 teachers were randomly chosen from various secondary schools in Kirsehir during the 2015-2016 educational terms, and data were gathered via…

  15. Mathematical Analysis for the Optimization of Wastewater Treatment Systems in Facultative Pond Indicator Organic Matter

    NASA Astrophysics Data System (ADS)

    Sunarsih; Widowati; Kartono; Sutrisno

    2018-02-01

    Stabilization ponds are easy to operate and their maintenance is simple. Treatment is carried out naturally and they are recommended in developing countries. The main disadvantage of these systems is the large land area they occupy. The aim of this study was to perform an optimization of the wastewater treatment system in a facultative pond, considering a mathematical analysis of the methodology to determine the model constraints on organic matter. The MATLAB optimization toolbox was used for nonlinear programming. A facultative pond was designed with this method and the optimization was then applied. The analysis shows that the treated water meets the quality requirements for discharge to water bodies. The results show a reduction of the hydraulic retention time by 4.83 days and a wastewater treatment efficiency of 84.16 percent.

  16. The Triangle Technique: a new evidence-based educational tool for pediatric medication calculations.

    PubMed

    Sredl, Darlene

    2006-01-01

    Many nursing students verbalize an aversion to mathematical concepts and experience math anxiety whenever a mathematical problem is confronted. Since nurses confront mathematical problems on a daily basis, they must learn to feel comfortable with their ability to perform these calculations correctly. The Triangle Technique, a new educational tool available to nurse educators, incorporates evidence-based concepts within a graphic model using visual, auditory, and kinesthetic learning styles to demonstrate pediatric medication calculations of normal therapeutic ranges. The theoretical framework for the technique is presented, as is a pilot study examining the efficacy of the educational tool. Statistically significant results obtained by Pearson's product-moment correlation indicate that students are better able to calculate accurate pediatric therapeutic dosage ranges after participation in the educational intervention of learning the Triangle Technique.

  17. Microwave-based medical diagnosis using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Modiri, Arezoo

    This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990's and has shown significant promise in early detection of some specific health threats. In comparison to the X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard which will be reported in ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of the swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished through addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm for the chosen benchmark problems, the algorithm is applied to MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level of complexity and randomness inherent to the selection of electromagnetic benchmark problems, a trend to resort to oversimplification in order to arrive at reasonable solutions has been taken in literature when utilizing analytical techniques. Here, an attempt has been made to avoid oversimplification when using the proposed swarm-based optimization algorithms.
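
    For reference, a minimal textbook global-best PSO on the sphere benchmark is sketched below; the hyperparameters are common default values, and the dissertation's modified PSO variants and electromagnetic benchmarks are not reproduced.

```python
# Minimal global-best particle swarm optimizer on a simple benchmark
# (the sphere function). Hyperparameters are typical textbook values; the
# dissertation's modified PSO variants are not reproduced here.
import numpy as np

def pso(cost, dim=5, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest_x, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
    g = np.argmin(pbest_f)
    gbest_x, gbest_f = pbest_x[g].copy(), pbest_f[g]

    w, c1, c2 = 0.72, 1.49, 1.49                    # inertia and acceleration
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)
        x = x + v
        f = np.apply_along_axis(cost, 1, x)
        improved = f < pbest_f
        pbest_x[improved], pbest_f[improved] = x[improved], f[improved]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest_x, gbest_f = pbest_x[g].copy(), pbest_f[g]
    return gbest_x, gbest_f

best_x, best_f = pso(lambda z: float(np.sum(z**2)))
print("best cost found:", best_f)
```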

  18. Selecting optimal structure of burners for tubular cylindrical furnaces by the mathematical experiment planning method

    NASA Astrophysics Data System (ADS)

    Katin, Viktor; Kosygin, Vladimir; Akhtiamov, Midkhat

    2017-10-01

    This paper substantiates the method of mathematical planning for experimental research in the process of selecting the most efficient types of burning devices for tubular refinery furnaces of vertical-cylindrical design. This paper provides detailed consideration of an experimental plan of a 4×4 Latin square type when studying the impact of three factors with four levels of variance. On the basis of the experimental research we have developed practical recommendations on the employment of optimal burners for two-step fuel combustion.

  19. An unconditionally stable method for numerically solving solar sail spacecraft equations of motion

    NASA Astrophysics Data System (ADS)

    Karwas, Alex

    Solar sails use the endless supply of the Sun's radiation to propel spacecraft through space. The sails use the momentum transfer from the impinging solar radiation to provide thrust to the spacecraft while expending zero fuel. Recently, the first solar sail spacecraft, or sailcraft, named IKAROS completed a successful mission to Venus and proved the concept of solar sail propulsion. Sailcraft experimental data is difficult to gather due to the large expenses of space travel, therefore, a reliable and accurate computational method is needed to make the process more efficient. Presented in this document is a new approach to simulating solar sail spacecraft trajectories. The new method provides unconditionally stable numerical solutions for trajectory propagation and includes an improved physical description over other methods. The unconditional stability of the new method means that a unique numerical solution is always determined. The improved physical description of the trajectory provides a numerical solution and time derivatives that are continuous throughout the entire trajectory. The error of the continuous numerical solution is also known for the entire trajectory. Optimal control for maximizing thrust is also provided within the framework of the new method. Verification of the new approach is presented through a mathematical description and through numerical simulations. The mathematical description provides details of the sailcraft equations of motion, the numerical method used to solve the equations, and the formulation for implementing the equations of motion into the numerical solver. Previous work in the field is summarized to show that the new approach can act as a replacement to previous trajectory propagation methods. A code was developed to perform the simulations and it is also described in this document. Results of the simulations are compared to the flight data from the IKAROS mission. Comparison of the two sets of data show that the new approach is capable of accurately simulating sailcraft motion. Sailcraft and spacecraft simulations are compared to flight data and to other numerical solution techniques. The new formulation shows an increase in accuracy over a widely used trajectory propagation technique. Simulations for two-dimensional, three-dimensional, and variable attitude trajectories are presented to show the multiple capabilities of the new technique. An element of optimal control is also part of the new technique. An additional equation is added to the sailcraft equations of motion that maximizes thrust in a specific direction. A technical description and results of an example optimization problem are presented. The spacecraft attitude dynamics equations take the simulation a step further by providing control torques using the angular rate and acceleration outputs of the numerical formulation.

  20. Parallel tiled Nussinov RNA folding loop nest generated using both dependence graph transitive closure and loop skewing.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2017-06-02

    RNA secondary structure prediction is a compute intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, the RNA folding approaches, such as the Nussinov base pair maximization, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for optimization of dense array codes. However, classical affine loop nest transformations used with these techniques do not optimize effectively codes of dynamic programming of RNA structure predictions. The purpose of this paper is to present a novel approach allowing for generation of a parallel tiled Nussinov RNA loop nest exposing significantly higher performance than that of known related code. This effect is achieved due to improving code locality and calculation parallelization. In order to improve code locality, we apply our previously published technique of automatic loop nest tiling to all the three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by means of applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to a tiled Nussinov loop nest. The technique is implemented as a part of the publicly available polyhedral source-to-source TRACO compiler. Generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of generated Nussinov RNA parallel code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
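
    The loop nest being transformed is the standard Nussinov dynamic program; a minimal serial version is sketched below for orientation. This is the textbook recurrence, not the TRACO-generated parallel tiled code.

```python
# Serial Nussinov base-pair maximization: the triply nested dynamic
# programming loop that the tiling/skewing transformations in the paper
# operate on. This is the textbook recurrence, not the TRACO-generated
# parallel tiled code.
def nussinov(seq, min_loop=1):
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for length in range(min_loop + 1, n):            # fill diagonal by diagonal
        for i in range(n - length):
            j = i + length
            best = max(N[i + 1][j], N[i][j - 1])
            if (seq[i], seq[j]) in pairs:
                best = max(best, N[i + 1][j - 1] + 1)
            for k in range(i + 1, j):                # bifurcation
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))   # maximum number of base pairs
```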

  1. A Mathematical Optimization Problem in Bioinformatics

    ERIC Educational Resources Information Center

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
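
    A minimal global-alignment dynamic program (Needleman-Wunsch style) illustrates the optimization described above; the scoring scheme (match +1, mismatch -1, gap -2) is a common classroom choice and not necessarily the article's.

```python
# Minimal global sequence alignment by dynamic programming
# (Needleman-Wunsch). The scoring scheme (match +1, mismatch -1, gap -2)
# is a common classroom choice, not necessarily the article's.
def align_score(a, b, match=1, mismatch=-1, gap=-2):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[m][n]

print(align_score("GATTACA", "GCATGCU"))   # optimal alignment score
```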

  2. Applications of numerical methods to simulate the movement of contaminants in groundwater.

    PubMed Central

    Sun, N Z

    1989-01-01

    This paper reviews mathematical models and numerical methods that have been extensively used to simulate the movement of contaminants through the subsurface. The major emphasis is placed on the numerical methods of advection-dominated transport problems and inverse problems. Several mathematical models that are commonly used in field problems are listed. A variety of numerical solutions for three-dimensional models are introduced, including the multiple cell balance method that can be considered a variation of the finite element method. The multiple cell balance method is easy to understand and convenient for solving field problems. When the advection transport dominates the dispersion transport, two kinds of numerical difficulties, overshoot and numerical dispersion, are always involved in solving standard, finite difference methods and finite element methods. To overcome these numerical difficulties, various numerical techniques are developed, such as upstream weighting methods and moving point methods. A complete review of these methods is given and we also mention the problems of parameter identification, reliability analysis, and optimal-experiment design that are absolutely necessary for constructing a practical model. PMID:2695327
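
    The upstream-weighting idea mentioned above can be sketched with a 1-D explicit finite-difference advection-dispersion solver; the velocity, dispersion coefficient, grid, and boundary treatment are arbitrary illustrative choices.

```python
# Minimal 1-D explicit finite-difference solver for advection-dispersion
# transport, using first-order upstream (upwind) weighting for the advection
# term. Velocity, dispersion coefficient and grid are arbitrary illustrative
# values; stability requires a suitably small time step.
import numpy as np

nx, L = 200, 100.0          # grid cells, domain length (m)
dx = L / nx
v, D = 0.5, 0.05            # velocity (m/d), dispersion coefficient (m^2/d)
dt = 0.5 * min(dx / v, dx**2 / (2 * D))   # conservative time step

c = np.zeros(nx)
c[:5] = 1.0                  # initial contaminant slug near the inlet

for _ in range(150):
    adv = -v * (c - np.roll(c, 1)) / dx          # upstream difference (v > 0)
    disp = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    c = c + dt * (adv + disp)
    c[0], c[-1] = 0.0, c[-2]                      # simple boundary treatment

print("peak concentration and its location (m):", c.max(), c.argmax() * dx)
```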

  3. A Mathematical Relationship for Hydromorphone Loading into Liposomes with Trans-Membrane Ammonium Sulfate Gradients

    PubMed Central

    TU, SHENG; MCGINNIS, TAMARA; KRUGNER-HIGBY, LISA; HEATH, TIMOTHY D.

    2014-01-01

    We have studied the loading of the opioid hydromorphone into liposomes using ammonium sulfate gradients. Unlike other drugs loaded with this technique, hydromorphone is freely soluble as the sulfate salt, and, consequently, does not precipitate in the liposomes after loading. We have derived a mathematical relationship that can predict the extent of loading based on the ammonium ion content of the liposomes and the amount of drug added for loading. We have adapted and used the Berthelot indophenol assay to measure the amount of ammonium ions in the liposomes. Plots of the inverse of the fraction of hydromorphone loaded versus the amount of hydromorphone added are linear, and the slope should be the inverse of the amount of ammonium ions present in the liposomes. The inverse of the slopes obtained closely correspond to the amount of ammonium ions in the liposomes measured with the Berthelot indophenol assay. We also show that loading can be less than optimal under conditions where osmotically driven loss of ammonium ions or leakage of drug after loading may occur. PMID:20014429
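
    The linear relationship described above can be restated in equation form as follows; the unit intercept is an assumption consistent with complete loading as the added drug amount vanishes, not a value quoted from the paper.

```latex
% Hedged restatement of the loading relationship described above.
% f : fraction of hydromorphone loaded
% D : amount of drug added for loading
% N : amount of ammonium ions encapsulated in the liposomes
% The unit intercept is an assumption (f -> 1 as D -> 0), not a quoted value.
\[
  \frac{1}{f} \;=\; 1 + \frac{D}{N}
  \qquad\Longrightarrow\qquad
  \text{slope of } \tfrac{1}{f} \text{ vs. } D \;=\; \frac{1}{N}.
\]
```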

  4. A mathematical relationship for hydromorphone loading into liposomes with trans-membrane ammonium sulfate gradients.

    PubMed

    Tu, Sheng; McGinnis, Tamara; Krugner-Higby, Lisa; Heath, Timothy D

    2010-06-01

    We have studied the loading of the opioid hydromorphone into liposomes using ammonium sulfate gradients. Unlike other drugs loaded with this technique, hydromorphone is freely soluble as the sulfate salt, and, consequently, does not precipitate in the liposomes after loading. We have derived a mathematical relationship that can predict the extent of loading based on the ammonium ion content of the liposomes and the amount of drug added for loading. We have adapted and used the Berthelot indophenol assay to measure the amount of ammonium ions in the liposomes. Plots of the inverse of the fraction of hydromorphone loaded versus the amount of hydromorphone added are linear, and the slope should be the inverse of the amount of ammonium ions present in the liposomes. The inverse of the slopes obtained closely correspond to the amount of ammonium ions in the liposomes measured with the Berthelot indophenol assay. We also show that loading can be less than optimal under conditions where osmotically driven loss of ammonium ions or leakage of drug after loading may occur. (c) 2009 Wiley-Liss, Inc. and the American Pharmacists Association

  5. Water supply management using an extended group fuzzy decision-making method: a case study in north-eastern Iran

    NASA Astrophysics Data System (ADS)

    Minatour, Yasser; Bonakdari, Hossein; Zarghami, Mahdi; Bakhshi, Maryam Ali

    2015-09-01

    The purpose of this study was to develop a group fuzzy multi-criteria decision-making method to be applied in rating problems associated with water resources management. Here, Chen's group fuzzy TOPSIS method is extended by a difference technique to handle the uncertainties of applying group decision making, and the extended group fuzzy TOPSIS method is then combined with a consistency check. In the presented method, linguistic judgments are first screened via a consistency-checking process, and these judgments are then used in the extended Chen's fuzzy TOPSIS method. Each expert's opinion is converted to exact numerical values and, to account for uncertainties, the opinions of the group are then converted to fuzzy numbers using three mathematical operators. The proposed method is applied to select the optimal strategy for the rural water supply of Nohoor village in north-eastern Iran, as a case study and illustrative example. Sensitivity analyses of the results and a comparison of the results with project reality showed that the proposed method offers good results for water resources projects.
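
    A minimal crisp TOPSIS ranking is sketched below to show the underlying ranking step; the fuzzy extension, the group aggregation operators, and the consistency check described above are omitted, and the decision matrix and weights are invented.

```python
# Minimal crisp TOPSIS ranking. The paper's method is a group *fuzzy* TOPSIS
# with a consistency check; this sketch shows only the underlying TOPSIS
# ranking step on a made-up decision matrix for water-supply strategies.
import numpy as np

X = np.array([[7, 9, 6],        # rows: candidate strategies
              [8, 6, 8],        # cols: criteria (e.g. cost, reliability, quality)
              [6, 8, 9]], dtype=float)
w = np.array([0.4, 0.35, 0.25])          # criteria weights
benefit = np.array([False, True, True])  # cost criterion is minimized

R = X / np.linalg.norm(X, axis=0)        # vector-normalize each criterion
V = R * w                                # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("closeness coefficients:", closeness)
print("best strategy index:", int(np.argmax(closeness)))
```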

  6. Goldmann tonometer error correcting prism: clinical evaluation.

    PubMed

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko T; Schwiegerling, Jim; Levine, Jason; Kew, Corin

    2017-01-01

    Clinically evaluate a modified applanating surface Goldmann tonometer prism designed to substantially negate errors due to patient variability in biomechanics. A modified Goldmann prism with a correcting applanation tonometry surface (CATS) was mathematically optimized to minimize the intraocular pressure (IOP) measurement error due to patient variability in corneal thickness, stiffness, curvature, and tear film adhesion force. A comparative clinical study of 109 eyes measured IOP with CATS and Goldmann prisms. The IOP measurement differences between the CATS and Goldmann prisms were correlated to corneal thickness, hysteresis, and curvature. The CATS tonometer prism in correcting for Goldmann central corneal thickness (CCT) error demonstrated a reduction to <±2 mmHg in 97% of a standard CCT population. This compares to only 54% with CCT error <±2 mmHg using the Goldmann prism. Equal reductions of ~50% in errors due to corneal rigidity and curvature were also demonstrated. The results validate the CATS prism's improved accuracy and expected reduced sensitivity to Goldmann errors without IOP bias as predicted by mathematical modeling. The CATS replacement for the Goldmann prism does not change Goldmann measurement technique or interpretation.

  7. Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios G.

    We developed path-wise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, and in particular for non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks with a very large number of parameters, (b) spatially distributed systems such as Kinetic Monte Carlo or Langevin Dynamics, (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials, with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.

  8. Avian Influenza spread and transmission dynamics

    USGS Publications Warehouse

    Bourouiba, Lydia; Gourley, Stephen A.; Liu, Rongsong; Takekawa, John Y.; Wu, Jianhong; Chen, Dongmei; Moulin, Bernard; Wu, Jianhong

    2015-01-01

    The spread of highly pathogenic avian influenza (HPAI) viruses of type A of subtype H5N1 has been a serious threat to global public health. Understanding the roles of various (migratory, wild, poultry) bird species in the transmission of these viruses is critical for designing and implementing effective control and intervention measures. Developing appropriate models and mathematical techniques to understand these roles and to evaluate the effectiveness of mitigation strategies has been a challenge. Recent developments in global health surveillance (especially satellite tracking and GIS techniques), combined with the mathematical theory of dynamical systems, have gradually shown the promise of some cutting-edge methodologies and techniques in mathematical biology for meeting this challenge.

  9. Dynamic, stochastic models for congestion pricing and congestion securities.

    DOT National Transportation Integrated Search

    2010-12-01

    This research considers congestion pricing under demand uncertainty. In particular, a robust optimization (RO) approach is applied to optimal congestion pricing problems under user equilibrium. A mathematical model is developed and an analysis perfor...

  10. Implementation of numerical simulation techniques in analysis of the accidents in complex technological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klishin, G.S.; Seleznev, V.E.; Aleoshin, V.V.

    1997-12-31

    Gas industry enterprises such as main pipelines, compressor gas transfer stations, and gas extraction complexes belong to the energy-intensive industry. Accidents there can result in catastrophes and great social, environmental and economic losses. Annually, according to official data, several dozen large accidents take place on pipelines in the USA and Russia. That is why prevention of accidents, analysis of the mechanisms of their development and prediction of their possible consequences are acute and important tasks nowadays. The reasons for accidents are usually of a complicated character and can be presented as a complex combination of natural, technical and human factors. Mathematical and computer simulations are safe, rather effective and comparatively inexpensive methods of accident analysis. They make it possible to analyze different mechanisms of failure occurrence and development, to assess their consequences and to give recommendations to prevent them. Besides the investigation of failure cases, numerical simulation techniques play an important role in the treatment of the diagnostics results of the objects and in the further construction of mathematical prognostic simulations of the object behavior in the period of time between two inspections. In solving diagnostics tasks and in the analysis of failure cases, the techniques of theoretical mechanics, the qualitative theory of differential equations, the mechanics of a continuous medium, chemical macro-kinetics and optimization techniques are implemented in the Conversion Design Bureau #5 (DB#5). Both universal and special numerical techniques and software (SW) are being developed in DB#5 for the solution of such tasks. Almost all of them are calibrated on calculations of the simulated and full-scale experiments performed at the VNIIEF and MINATOM testing sites. It is worth noting that over the long years of work, a fruitful and effective collaboration of the institute's theoreticians, mathematicians and experimentalists has been established to solve such tasks.

  11. Engaging Future Teachers in Problem-Based Learning with the Park City Mathematics Institute Problems

    ERIC Educational Resources Information Center

    Pilgrim, Mary E.

    2014-01-01

    Problem-based learning (PBL) is a pedagogical technique recommended for K-12 mathematics classrooms. However, the mathematics courses in future teachers' degree programs are often lecture based. Students typically learn about problem-based learning in theory, but rarely get to experience it first-hand in their mathematics courses. The premise…

  12. Optimization of barrel temperature and kidney bean flour percentage based on various physical properties of extruded snacks.

    PubMed

    Agathian, G; Semwal, A D; Sharma, G K

    2015-07-01

    The aim of the experiment was to optimize the barrel temperature (122 to 178 ± 0.5 °C) and red kidney bean flour percentage (KBF) (12 to 68 ± 0.5 %) based on physical properties of the extrudates, such as flash-off percentage, water absorption index (WAI), water solubility index (WSI), bulk density (BD), radial expansion ratio (RER) and overall acceptability (OAA), using a single-screw extruder. The study was carried out with a central composite rotatable design (CCRD) using response surface methodology (RSM), and the moisture content of the feed was kept constant at 16.0 ± 0.5 % throughout the experiments. Mathematical models for the various responses were found to fit significantly (P < 0.05) for prediction. Optimization of the experimental conditions was carried out using a numerical optimization technique, and the optimum barrel temperatures and kidney bean flour percentage were 120 °C (T1) and 142.62 °C (T2 = T3) and 20 %, respectively, with a desirability value of 0.909. Experiments were carried out using the predicted values and verified using a t-test and the coefficient of variation percentage. The extruded snack prepared with rice flour (80 %) and kidney bean flour (20 %) at the optimized conditions was accepted by the taste panellists, and kidney bean flour incorporation above 20 % was found to decrease the overall acceptability score.
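
    The desirability-based numerical optimization can be sketched as follows, assuming invented quadratic response surfaces for expansion ratio (to maximize) and bulk density (to minimize); the coefficients and target ranges are not the paper's fitted models.

```python
# Desirability-function sketch: combine two responses predicted by simple
# quadratic models (expansion ratio to maximize, bulk density to minimize)
# into an overall desirability and search the factor grid for its maximum.
# The model coefficients and target ranges are invented, not the paper's
# fitted RSM models.
import numpy as np

temp = np.linspace(120, 180, 121)          # barrel temperature (deg C)
kbf = np.linspace(12, 68, 113)             # kidney bean flour (%)
T, K = np.meshgrid(temp, kbf)

# Hypothetical fitted response surfaces.
rer = 3.0 - 0.0008 * (T - 150) ** 2 - 0.001 * (K - 30) ** 2     # expansion ratio
bd = 0.20 + 0.00002 * (T - 120) ** 2 + 0.00005 * (K - 20) ** 2  # bulk density (g/cm^3)

def desir_max(y, lo, hi):
    return np.clip((y - lo) / (hi - lo), 0, 1)     # larger-is-better

def desir_min(y, lo, hi):
    return np.clip((hi - y) / (hi - lo), 0, 1)     # smaller-is-better

D = np.sqrt(desir_max(rer, 2.0, 3.0) * desir_min(bd, 0.20, 0.40))
i, j = np.unravel_index(np.argmax(D), D.shape)
print(f"best setting: {T[i, j]:.1f} deg C, {K[i, j]:.1f}% KBF, desirability {D[i, j]:.3f}")
```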

  13. Maximizing the Biochemical Resolving Power of Fluorescence Microscopy

    PubMed Central

    Esposito, Alessandro; Popleteeva, Marina; Venkitaraman, Ashok R.

    2013-01-01

    Most recent advances in fluorescence microscopy have focused on achieving spatial resolutions below the diffraction limit. However, the inherent capability of fluorescence microscopy to non-invasively resolve different biochemical or physical environments in biological samples has not yet been formally described, because an adequate and general theoretical framework is lacking. Here, we develop a mathematical characterization of the biochemical resolution in fluorescence detection with Fisher information analysis. To improve the precision and the resolution of quantitative imaging methods, we demonstrate strategies for the optimization of fluorescence lifetime, fluorescence anisotropy and hyperspectral detection, as well as different multi-dimensional techniques. We describe optimized imaging protocols, provide optimization algorithms and describe precision and resolving power in biochemical imaging thanks to the analysis of the general properties of Fisher information in fluorescence detection. These strategies enable the optimal use of the information content available within the limited photon-budget typically available in fluorescence microscopy. This theoretical foundation leads to a generalized strategy for the optimization of multi-dimensional optical detection, and demonstrates how the parallel detection of all properties of fluorescence can maximize the biochemical resolving power of fluorescence microscopy, an approach we term Hyper Dimensional Imaging Microscopy (HDIM). Our work provides a theoretical framework for the description of the biochemical resolution in fluorescence microscopy, irrespective of spatial resolution, and for the development of a new class of microscopes that exploit multi-parametric detection systems. PMID:24204821

  14. Optimal design of the first stage of the plate-fin heat exchanger for the EAST cryogenic system

    NASA Astrophysics Data System (ADS)

    Qingfeng, JIANG; Zhigang, ZHU; Qiyong, ZHANG; Ming, ZHUANG; Xiaofei, LU

    2018-03-01

    The size of the heat exchanger is an important factor determining the dimensions of the cold box in helium cryogenic systems. In this paper, a counter-flow multi-stream plate-fin heat exchanger is optimized by means of a spatial interpolation method coupled with a hybrid genetic algorithm. Compared with empirical correlations, this spatial interpolation algorithm based on a kriging model can be adopted to more precisely predict the Colburn heat transfer factors and Fanning friction factors of offset-strip fins. Moreover, strict computational fluid dynamics simulations can be carried out to predict the heat transfer and friction performance in the absence of reliable experimental data. Within the constraints of heat exchange requirements, maximum allowable pressure drop, existing manufacturing techniques and structural strength, a mathematical model of an optimized design with discrete and continuous variables based on a hybrid genetic algorithm is established in order to minimize the volume. The results show that for the first-stage heat exchanger in the EAST refrigerator, the structural size could be decreased from the original 2.200 × 0.600 × 0.627 (m3) to the optimized 1.854 × 0.420 × 0.340 (m3), with a large reduction in volume. The current work demonstrates that the proposed method could be a useful tool to achieve optimization in an actual engineering project during the practical design process.
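
    A minimal sketch of the surrogate-then-optimize pattern follows: fit a kriging (Gaussian-process) model to a few "expensive" response evaluations and minimize the cheap surrogate. The response function, kernel, and bounds are placeholders, and a plain grid search stands in for the paper's hybrid genetic algorithm.

```python
# Surrogate-assisted design sketch: fit a kriging (Gaussian process) model
# to a handful of "expensive" evaluations of a response (standing in for
# CFD-derived friction/heat-transfer data) and minimize the surrogate.
# Data, kernel and bounds are placeholders; the paper's hybrid genetic
# algorithm is replaced here by a plain grid search over the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_response(x):
    # Stand-in for an expensive CFD evaluation (e.g. a core-volume penalty).
    return (x[:, 0] - 0.4) ** 2 + 2.0 * (x[:, 1] - 0.6) ** 2

rng = np.random.default_rng(3)
X_train = rng.uniform(0, 1, (15, 2))           # sampled fin-geometry variables
y_train = expensive_response(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF([0.2, 0.2]),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Minimize the cheap surrogate on a dense grid.
g = np.linspace(0, 1, 101)
grid = np.array(np.meshgrid(g, g)).reshape(2, -1).T
pred = gp.predict(grid)
best = grid[np.argmin(pred)]
print("surrogate minimizer:", best, "predicted value:", pred.min())
```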

  15. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Levine, William S.; Barlow, Jewel

    1993-01-01

    This report summarizes the work that was done on the project from 1 Apr. 1992 to 31 Mar. 1993. The main goal of this research is to develop a practical tool for rotorcraft control system design based on interactive optimization tools (CONSOL-OPTCAD) and classical rotorcraft design considerations (ADOCS). This approach enables the designer to combine engineering intuition and experience with parametric optimization. The combination should make it possible to produce a better design faster than would be possible using either pure optimization or pure intuition and experience. We emphasize that the goal of this project is not to develop an algorithm. It is to develop a tool. We want to keep the human designer in the design process to take advantage of his or her experience and creativity. The role of the computer is to perform the calculation necessary to improve and to display the performance of the nominal design. Briefly, during the first year we have connected CONSOL-OPTCAD, an existing software package for optimizing parameters with respect to multiple performance criteria, to a simplified nonlinear simulation of the UH-60 rotorcraft. We have also created mathematical approximations to the Mil-specs for rotorcraft handling qualities and input them into CONSOL-OPTCAD. Finally, we have developed the additional software necessary to use CONSOL-OPTCAD for the design of rotorcraft controllers.

  16. A continuous optimization approach for inferring parameters in mathematical models of regulatory networks.

    PubMed

    Deng, Zhimin; Tian, Tianhai

    2014-07-29

    The advances of systems biology have raised a large number of sophisticated mathematical models for describing the dynamic property of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling. The number of unknown parameters in mathematical models may be larger than the number of observation data. The imbalance between the number of experimental data and number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of system dynamics as well as the first and second order derivatives of continuous functions. The expanded dataset is the basis to infer unknown model parameters using various continuous optimization criteria, including the error of simulation only, error of both simulation and the first derivative, or error of simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed new approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results of the ERK kinase activation module show that the continuous absolute-error criteria using both function and high order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria. We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
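
    A minimal sketch of the spline-based continuous criterion, under simplifying assumptions (a one-parameter model dx/dt = -k x and synthetic noisy data rather than any of the case-study networks), illustrates how the interpolated function and its derivative enter the fitness function:

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.optimize import minimize_scalar

        # Toy system: dx/dt = -k*x with true k = 0.8, observed sparsely with noise.
        rng = np.random.default_rng(2)
        k_true = 0.8
        t_obs = np.linspace(0.0, 5.0, 8)                 # few measurement time points
        x_obs = np.exp(-k_true * t_obs) + rng.normal(0, 0.01, t_obs.size)

        # Step 1: spline interpolation gives continuous state and derivative estimates.
        spline = CubicSpline(t_obs, x_obs)
        t_dense = np.linspace(0.0, 5.0, 200)
        x_dense = spline(t_dense)
        dx_dense = spline(t_dense, 1)                    # first derivative of the spline

        # Step 2: continuous absolute-error criterion combining function mismatch and
        # derivative mismatch (the model derivative of dx/dt = -k*x is -k*x).
        def criterion(k):
            fit_err = np.abs(x_dense - np.exp(-k * t_dense)).mean()
            deriv_err = np.abs(dx_dense - (-k * x_dense)).mean()
            return fit_err + deriv_err

        k_hat = minimize_scalar(criterion, bounds=(0.01, 5.0), method='bounded').x
        print(f"estimated k = {k_hat:.3f} (true {k_true})")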

  17. Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact

    NASA Astrophysics Data System (ADS)

    Abadjiev, Valentin; Kawasaki, Haruhisa

    2014-09-01

    Computer-aided design has produced various software tools for scientific research in gearing theory, as well as adequate scientific support for gear drive manufacture. The computer programs described here are based on mathematical models resulting from that research. Modern gear transmissions require new mathematical approaches to their geometric, technological and strength analysis. The process of optimization, synthesis and design relies on adequate iteration procedures that find an optimal solution by varying definite parameters. The study is dedicated to the accepted methodology for creating software for the synthesis of a class of high-reduction hyperboloid gears - Spiroid and Helicon ones (Spiroid and Helicon are trademarks registered by the Illinois Tool Works, Chicago, Ill). The developed basic computer products are software based on original mathematical models, built on the two mathematical models for the synthesis: "upon a pitch contact point" and "upon a mesh region". Computer programs are worked out on the basis of the described mathematical models, and the relations between them are shown. The application of these approaches to the synthesis of the gear drives in question is illustrated.

  18. Mathematical modeling analysis of intratumoral disposition of anticancer agents and drug delivery systems.

    PubMed

    Popilski, Hen; Stepensky, David

    2015-05-01

    Solid tumors are characterized by complex morphology. Numerous factors relating to the composition of the cells and tumor stroma, vascularization and drainage of fluids affect the local microenvironment within a specific location inside the tumor. As a result, the intratumoral drug/drug delivery system (DDS) disposition following systemic or local administration is non-homogeneous and its complexity reflects the differences in the local microenvironment. Mathematical models can be used to analyze the intratumoral drug/DDS disposition and pharmacological effects and to assist in choice of optimal anticancer treatment strategies. The mathematical models that have been applied by different research groups to describe the intratumoral disposition of anticancer drugs/DDSs are summarized in this article. The properties of these models and of their suitability for prediction of the drug/DDS intratumoral disposition and pharmacological effects are reviewed. Currently available mathematical models appear to neglect some of the major factors that govern the drug/DDS intratumoral disposition, and apparently possess limited prediction capabilities. More sophisticated and detailed mathematical models and their extensive validation are needed for reliable prediction of different treatment scenarios and for optimization of drug treatment in the individual cancer patients.

  19. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either gives a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as dimensionality increases. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.

  20. An Enzymatic Clinical Chemistry Laboratory Experiment Incorporating an Introduction to Mathematical Method Comparison Techniques

    ERIC Educational Resources Information Center

    Duxbury, Mark

    2004-01-01

    An enzymatic laboratory experiment based on the analysis of serum is described that is suitable for students of clinical chemistry. The experiment incorporates an introduction to mathematical method-comparison techniques in which three different clinical glucose analysis methods are compared using linear regression and Bland-Altman difference…
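
    A short sketch of the two comparison techniques on hypothetical paired glucose results (the numbers are synthetic, not the experiment's data) might look like this:

        import numpy as np

        # Hypothetical paired glucose results (mmol/L) from two analysis methods.
        rng = np.random.default_rng(3)
        reference = rng.uniform(3.0, 15.0, 25)
        candidate = 1.02 * reference - 0.1 + rng.normal(0, 0.2, reference.size)

        # Linear regression of candidate on reference (slope ~1, intercept ~0 indicate agreement).
        slope, intercept = np.polyfit(reference, candidate, 1)

        # Bland-Altman statistics: bias and 95% limits of agreement on the differences.
        diff = candidate - reference
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        print(f"regression: y = {slope:.3f} x + {intercept:.3f}")
        print(f"Bland-Altman bias = {bias:.3f}, "
              f"limits of agreement = ({bias - loa:.3f}, {bias + loa:.3f})")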

  1. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves the reduction of multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than classical HPLC.
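
    The idea of building one regression line per wavelength and reducing the multivariate result to a univariate estimate can be sketched as follows; the single-analyte setting, sensitivities and peak areas are illustrative assumptions rather than the paper's EA/HCT data:

        import numpy as np

        # Hypothetical calibration: peak areas of one analyte measured at five wavelengths
        # for standards of known concentration (mg/L).
        conc_std = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
        rng = np.random.default_rng(4)
        sens = np.array([1.8, 2.3, 3.1, 2.7, 1.2])               # sensitivity per wavelength
        areas_std = np.outer(conc_std, sens) + rng.normal(0, 0.5, (5, 5))

        # One linear regression (area vs. concentration) per wavelength.
        coeffs = [np.polyfit(conc_std, areas_std[:, w], 1) for w in range(5)]

        def predict_concentration(sample_areas):
            """Reduce the five wavelength regressions to a single concentration estimate
            by inverting each calibration line and averaging the univariate results."""
            estimates = [(a - b) / m for a, (m, b) in zip(sample_areas, coeffs)]
            return float(np.mean(estimates))

        unknown_areas = np.array([55.0, 70.0, 94.0, 82.0, 36.5])  # synthetic test sample
        print(f"predicted concentration ~ {predict_concentration(unknown_areas):.1f} mg/L")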

  2. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), for a realistically scaled model the problem may be difficult, if not impossible, to solve through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
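
    A compact sketch of the design search, assuming a random sensitivity matrix in place of the POD-reduced groundwater model and a deliberately minimal genetic algorithm over fixed-size well subsets:

        import numpy as np

        rng = np.random.default_rng(5)

        # Stand-in sensitivity matrix: d(head at candidate well i)/d(parameter j),
        # as would come from a reduced-order groundwater model.
        n_candidates, n_params, n_select = 30, 6, 5
        S = rng.normal(size=(n_candidates, n_params))

        def information(design):
            """Maximal-information criterion: sum of squared sensitivities over chosen wells."""
            return float(np.sum(S[list(design)] ** 2))

        def random_design():
            return frozenset(rng.choice(n_candidates, n_select, replace=False))

        def crossover(a, b):
            return frozenset(rng.choice(list(a | b), n_select, replace=False))

        def mutate(design, rate=0.2):
            design = set(design)
            if rng.random() < rate:
                design.remove(rng.choice(list(design)))
                design.add(rng.choice([i for i in range(n_candidates) if i not in design]))
            return frozenset(design)

        # Minimal genetic algorithm: keep the fittest half, breed the rest.
        population = [random_design() for _ in range(40)]
        for _ in range(60):
            population.sort(key=information, reverse=True)
            elite = population[:20]
            children = []
            while len(children) < 20:
                i, j = rng.choice(len(elite), 2, replace=False)
                children.append(mutate(crossover(elite[i], elite[j])))
            population = elite + children

        best = max(population, key=information)
        print("selected observation wells:", sorted(best),
              "information:", round(information(best), 2))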

  3. The effect of total noise on two-dimension OCDMA codes

    NASA Astrophysics Data System (ADS)

    Dulaimi, Layth A. Khalil Al; Badlishah Ahmed, R.; Yaakob, Naimah; Aljunid, Syed A.; Matem, Rima

    2017-11-01

    In this research, we evaluate the effect of total noise on the performance of two-dimensional (2-D) optical code-division multiple access (OCDMA) systems using the 2-D Modified Double Weight (MDW) code under various link parameters, including the impact of multiple-access interference (MAI) and other noise sources. The 2-D MDW code is compared mathematically with other codes that use similar techniques. We analyzed and optimized the data rate and effective received power. The performance and optimization of the MDW code in an OCDMA system are reported; the bit error rate (BER) can be significantly improved when the desired 2-D MDW code parameters are selected, especially the cross-correlation properties. This reduces the MAI in the system and mitigates the BER and phase-induced intensity noise (PIIN) in incoherent OCDMA. The analysis permits a thorough understanding of the impact of PIIN, shot and thermal noise on 2-D MDW OCDMA system performance. PIIN is the main noise factor in the OCDMA network.

  4. ATTDES: An Expert System for Satellite Attitude Determination and Control. 2

    NASA Technical Reports Server (NTRS)

    Mackison, Donald L.; Gifford, Kevin

    1996-01-01

    The design, analysis, and flight operations of satellite attitude determination and attitude control systems require extensive mathematical formulation, optimization studies, and computer simulation. This is best done by an analyst with extensive education and experience. The development of programs such as ATTDES permits the use of advanced techniques by those with less experience. Typical tasks include mission analysis to select stabilization and damping schemes, attitude determination sensors and algorithms, and control system designs that meet program requirements. ATTDES is a system that supports all of these activities, including high-fidelity orbit environment models that can be used for preliminary analysis, parameter selection, stabilization schemes, the development of estimators, covariance analyses, and optimization, and it can support ongoing orbit activities. The modification of existing simulations to model new configurations for these purposes can be an expensive, time-consuming activity that becomes a pacing item in the development and operation of such new systems. The use of an integrated tool such as ATTDES significantly reduces the effort and time required for these tasks.

  5. Force measurements in stiff, 3D, opaque granular materials

    NASA Astrophysics Data System (ADS)

    Hurley, Ryan C.; Hall, Stephen A.; Andrade, José E.; Wright, Jonathan

    2017-06-01

    We present results from two experiments that provide the first quantification of inter-particle force networks in stiff, 3D, opaque granular materials. Force vectors between all grains were determined using a mathematical optimization technique that seeks to satisfy grain equilibrium and strain measurements. Quantities needed in the optimization - the spatial location of the inter-particle contact network and tensor grain strains - were found using 3D X-ray diffraction and X-ray computed tomography. The statistics of the force networks are consistent with those found in past simulations and 2D experiments. In particular, we observe an exponential decay of normal forces above the mean and a partition of forces into strong and weak networks. In the first experiment, involving 77 single-crystal quartz grains, we also report on the temporal correlation of the force network across two sequential load cycles. In the second experiment, involving 1099 single-crystal ruby grains, we characterize force network statistics at low levels of compression.

  6. SU-E-T-478: Sliding Window Multi-Criteria IMRT Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, D; Papp, D; Unkelbach, J

    2014-06-01

    Purpose: To demonstrate a method for what-you-see-is-what-you-get multi-criteria Pareto surface navigation for step and shoot IMRT treatment planning. Methods: We show mathematically how multiple sliding window treatment plans can be averaged to yield a single plan whose dose distribution is the dosimetric average of the averaged plans. This is incorporated into the Pareto surface navigation based approach to treatment planning in such a way that as the user navigates the surface, the plans he/she is viewing are ready to be delivered (i.e. there is no extra ‘segment the plans’ step that often leads to unacceptable plan degradation in step and shoot Pareto surface navigation). We also describe how the technique can be applied to VMAT. Briefly, sliding window VMAT plans are created such that MLC leaves paint out fluence maps every 15 degrees or so. These fluence map leaf trajectories are averaged in the same way the static beam IMRT ones are. Results: We show mathematically that fluence maps are exactly averaged using our leaf sweep averaging algorithm. Leaf transmission and output factor corrections effects, which are ignored in this work, can lead to small errors in terms of the dose distributions not being exactly averaged even though the fluence maps are. However, our demonstrations show that the dose distributions are almost exactly averaged as well. We demonstrate the technique both for IMRT and VMAT. Conclusions: By turning to sliding window delivery, we show that the problem of losing plan fidelity during the conversion of an idealized fluence map plan into a deliverable plan is remedied. This will allow for multicriteria optimization that avoids the pitfall that the planning has to be redone after the conversion into MLC segments due to plan quality decline. David Craft partially funded by RaySearch Laboratories.

  7. Optimization,Modeling, and Control: Applications to Klystron Designing and Hepatitis C Virus Dynamics

    NASA Astrophysics Data System (ADS)

    Lankford, George Bernard

    In this dissertation, we address applying mathematical and numerical techniques in the fields of high energy physics and biomedical sciences. The first portion of this thesis presents a method for optimizing the design of klystron circuits. A klystron is an electron beam tube lined with cavities that emit resonant frequencies to velocity modulate electrons that pass through the tube. Radio frequencies (RF) inserted in the klystron are amplified due to the velocity modulation of the electrons. The routine described in this work automates the selection of cavity positions, resonant frequencies, quality factors, and other circuit parameters to maximize the efficiency with required gain. The method is based on deterministic sampling methods. We will describe the procedure and give several examples for both narrow and wide band klystrons, using the klystron codes AJDISK (Java) and TESLA (Python). The rest of the dissertation is dedicated to developing, calibrating and using a mathematical model for hepatitis C dynamics with triple drug combination therapy. Groundbreaking new drugs, called direct acting antivirals, have been introduced recently to fight off chronic hepatitis C virus infection. The model we introduce is for hepatitis C dynamics treated with the direct acting antiviral drug, telaprevir, along with traditional interferon and ribavirin treatments to understand how this therapy affects the viral load of patients exhibiting different types of response. We use sensitivity and identifiability techniques to determine which parameters can be best estimated from viral load data. We use these estimations to give patient-specific fits of the model to partial viral response, end-of-treatment response, and breakthrough patients. We will then revise the model to incorporate an immune response dynamic to more accurately describe the dynamics. Finally, we will implement a suboptimal control to acquire a drug treatment regimen that will alleviate the systemic cost associated with constant drug treatment.

  8. An objective function exploiting suboptimal solutions in metabolic networks

    PubMed Central

    2013-01-01

    Background Flux Balance Analysis is a theoretically elegant, computationally efficient, genome-scale approach to predicting biochemical reaction fluxes. Yet FBA models exhibit persistent mathematical degeneracy that generally limits their predictive power. Results We propose a novel objective function for cellular metabolism that accounts for and exploits degeneracy in the metabolic network to improve flux predictions. In our model, regulation drives metabolism toward a region of flux space that allows nearly optimal growth. Metabolic mutants deviate minimally from this region, a function represented mathematically as a convex cone. Near-optimal flux configurations within this region are considered equally plausible and not subject to further optimizing regulation. Consistent with relaxed regulation near optimality, we find that the size of the near-optimal region predicts flux variability under experimental perturbation. Conclusion Accounting for suboptimal solutions can improve the predictive power of metabolic FBA models. Because fluctuations of enzyme and metabolite levels are inevitable, tolerance for suboptimality may support a functionally robust metabolic network. PMID:24088221
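
    A minimal flux balance calculation, together with a probe of the near-optimal region, can be written with an ordinary linear-programming solver. The three-reaction toy network below is an illustrative assumption, not the paper's model:

        import numpy as np
        from scipy.optimize import linprog

        # Toy stoichiometric matrix (rows: metabolites A, B; columns: reactions
        # R1 uptake -> A, R2 A -> B, R3 B -> biomass).
        S = np.array([[1.0, -1.0,  0.0],
                      [0.0,  1.0, -1.0]])
        bounds = [(0, 10), (0, 1000), (0, 1000)]     # uptake limited to 10 mmol/gDW/h

        # FBA: maximise biomass flux v3 subject to the steady-state constraint S v = 0.
        res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print("optimal flux distribution:", res.x)

        # Near-optimal region: flux vectors achieving >= 95% of optimal growth are
        # treated as equally plausible; here we probe its extent for reaction R2.
        growth_min = 0.95 * (-res.fun)
        res_lo = linprog(c=[0, 1, 0], A_eq=S, b_eq=np.zeros(2),
                         A_ub=[[0, 0, -1]], b_ub=[-growth_min], bounds=bounds, method="highs")
        res_hi = linprog(c=[0, -1, 0], A_eq=S, b_eq=np.zeros(2),
                         A_ub=[[0, 0, -1]], b_ub=[-growth_min], bounds=bounds, method="highs")
        print(f"R2 flux range at 95% optimality: [{res_lo.x[1]:.2f}, {res_hi.x[1]:.2f}]")

    The width of such flux ranges is the kind of quantity the abstract relates to experimentally observed flux variability.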

  9. Improving Simulated Annealing by Recasting it as a Non-Cooperative Game

    NASA Technical Reports Server (NTRS)

    Wolpert, David; Bandari, Esfandiar; Tumer, Kagan

    2001-01-01

    The game-theoretic field of COllective INtelligence (COIN) concerns the design of computer-based players engaged in a non-cooperative game so that as those players pursue their self-interests, a pre-specified global goal for the collective computational system is achieved "as a side-effect". Previous implementations of COIN algorithms have outperformed conventional techniques by up to several orders of magnitude, on domains ranging from telecommunications control to optimization in congestion problems. Recent mathematical developments have revealed that these previously developed game-theory-motivated algorithms were based on only two of the three factors determining performance. Consideration of only the third factor would instead lead to conventional optimization techniques like simulated annealing that have little to do with non-cooperative games. In this paper we present an algorithm based on all three terms at once. This algorithm can be viewed as a way to modify simulated annealing by recasting it as a non-cooperative game, with each variable replaced by a player. This recasting allows us to leverage the intelligent behavior of the individual players to substantially improve the exploration step of the simulated annealing. Experiments are presented demonstrating that this recasting improves simulated annealing by several orders of magnitude for spin glass relaxation and bin-packing.

  10. Experimental analysis of a mobile health system for mood disorders.

    PubMed

    Massey, Tammara; Marfia, Gustavo; Potkonjak, Miodrag; Sarrafzadeh, Majid

    2010-03-01

    Depression is one of the leading causes of disability. Methods are needed to quantitatively classify emotions in order to better understand and treat mood disorders. This research proposes techniques to improve communication in a body sensor network (BSN) that gathers data on the affective states of the patient. These BSNs can continuously monitor, discretely quantify, and classify a patient's depressive states. In addition, data on the patient's lifestyle can be correlated with his/her physiological conditions to identify how various stimuli trigger symptoms. This continuous stream of data is an improvement over the snapshot of localized symptoms that a doctor often collects during a medical examination. Our research first quantifies how the body interferes with communication in a BSN and detects a pattern between the line of sight of an embedded device and its reception rate. Then, a mathematical model of the data using linear programming techniques determines the optimal placement and number of sensors in a BSN to improve communication. Experimental results show that the optimal placement of embedded devices can reduce power cost by up to 27% and hardware costs by up to 47%. This research brings researchers a step closer to continuous, real-time systemic monitoring that will allow one to analyze dynamic human physiology and to understand, diagnose, and treat mood disorders.

  11. Volumetric Verification of Multiaxis Machine Tool Using Laser Tracker

    PubMed Central

    Aguilar, Juan José

    2014-01-01

    This paper aims to present a method of volumetric verification in machine tools with linear and rotary axes using a laser tracker. Beyond a method for a particular machine, it presents a methodology that can be used in any machine type. Along this paper, the schema and kinematic model of a machine with three axes of movement, two linear and one rotational axes, including the measurement system and the nominal rotation matrix of the rotational axis are presented. Using this, the machine tool volumetric error is obtained and nonlinear optimization techniques are employed to improve the accuracy of the machine tool. The verification provides a mathematical, not physical, compensation, in less time than other methods of verification by means of the indirect measurement of geometric errors of the machine from the linear and rotary axes. This paper presents an extensive study about the appropriateness and drawbacks of the regression function employed depending on the types of movement of the axes of any machine. In the same way, strengths and weaknesses of measurement methods and optimization techniques depending on the space available to place the measurement system are presented. These studies provide the most appropriate strategies to verify each machine tool taking into consideration its configuration and its available work space. PMID:25202744

  12. A review of optimization and quantification techniques for chemical exchange saturation transfer (CEST) MRI toward sensitive in vivo imaging

    PubMed Central

    Guo, Yingkun; Zheng, Hairong; Sun, Phillip Zhe

    2015-01-01

    Chemical exchange saturation transfer (CEST) MRI is a versatile imaging method that probes the chemical exchange between bulk water and exchangeable protons. CEST imaging indirectly detects dilute labile protons via bulk water signal changes following selective saturation of exchangeable protons, which offers substantial sensitivity enhancement and has sparked numerous biomedical applications. Over the past decade, CEST imaging techniques have rapidly evolved due to contributions from multiple domains, including the development of CEST mathematical models, innovative contrast agent designs, sensitive data acquisition schemes, efficient field inhomogeneity correction algorithms, and quantitative CEST (qCEST) analysis. The CEST system that underlies the apparent CEST-weighted effect, however, is complex. The experimentally measurable CEST effect depends not only on parameters such as CEST agent concentration, pH and temperature, but also on relaxation rate, magnetic field strength and more importantly, experimental parameters including repetition time, RF irradiation amplitude and scheme, and image readout. Thorough understanding of the underlying CEST system using qCEST analysis may augment the diagnostic capability of conventional imaging. In this review, we provide a concise explanation of CEST acquisition methods and processing algorithms, including their advantages and limitations, for optimization and quantification of CEST MRI experiments. PMID:25641791

  13. Undergraduate Mathematics Students' Pronumeral Misconceptions

    ERIC Educational Resources Information Center

    Bardini, Caroline; Vincent, Jill; Pierce, Robyn; King, Deborah

    2014-01-01

    Despite an emphasis on manipulative algebraic techniques in secondary school algebra, many tertiary mathematics students have mastered these skills without conceptual understanding. A significant number of students with high tertiary entrance ranks enrolled in first semester university mathematics were found to have misconceptions relating to…

  14. Mathematical Education for Geographers

    ERIC Educational Resources Information Center

    Wilson, Alan

    1978-01-01

    Outlines mathematical topics of use to college geography students; identifies teaching methods for mathematical techniques in geography at the University of Leeds; and discusses the problem of providing students with a framework for synthesizing all the content of geography education. For journal availability, see SO 506 593. (Author/AV)

  15. Design of experiment approach for the process optimisation of microwave assisted extraction of lupeol from Ficus racemosa leaves using response surface methodology.

    PubMed

    Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C

    2013-01-01

    Triterpenoids are a group of important phytocomponents from Ficus racemosa (syn. Ficus glomerata Roxb.) that are known to possess diverse pharmacological activities and which have prompted the development of various extraction techniques and strategies for its better utilisation. To develop an effective, rapid and ecofriendly microwave-assisted extraction (MAE) strategy to optimise the extraction of a potent bioactive triterpenoid compound, lupeol, from young leaves of Ficus racemosa using response surface methodology (RSM) for industrial scale-up. Initially a Plackett-Burman design matrix was applied to identify the most significant extraction variables amongst microwave power, irradiation time, particle size, solvent:sample ratio loading, varying solvent strength and pre-leaching time on lupeol extraction. Among the six variables tested, microwave power, irradiation time and solvent-sample/loading ratio were found to have a significant effect (P < 0.05) on lupeol extraction and were fitted to a Box-Behnken-design-generated quadratic polynomial equation to predict optimal extraction conditions as well as to locate operability regions with maximum yield. The optimal conditions were microwave power of 65.67% of 700 W, extraction time of 4.27 min and solvent-sample ratio loading of 21.33 mL/g. Confirmation trials under the optimal conditions gave an experimental yield (18.52 µg/g of dry leaves) close to the RSM predicted value of 18.71 µg/g. Under the optimal conditions the mathematical model was found to be well fitted with the experimental data. The MAE was found to be a more rapid, convenient and appropriate extraction method, with a higher yield and lower solvent consumption when compared with conventional extraction techniques. Copyright © 2012 John Wiley & Sons, Ltd.
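
    The response-surface step can be sketched as fitting a full quadratic model and locating its optimum inside the coded design region. The sketch below uses a synthetic yield surface and randomly chosen coded levels rather than a true Box-Behnken layout, so it only illustrates the mechanics:

        import numpy as np
        from itertools import combinations
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)

        def quad_features(X):
            """Full quadratic model: intercept, linear, square and two-way interaction terms."""
            cols = [np.ones(len(X))]
            cols += [X[:, i] for i in range(3)]
            cols += [X[:, i] ** 2 for i in range(3)]
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
            return np.column_stack(cols)

        # Coded factor levels (-1, 0, +1) for power, time and solvent ratio, with a
        # synthetic yield surface standing in for the measured lupeol responses.
        X = rng.choice([-1.0, 0.0, 1.0], size=(15, 3))
        y = (18 - 2 * (X[:, 0] - 0.3) ** 2 - 1.5 * (X[:, 1] - 0.1) ** 2
             - (X[:, 2] - 0.2) ** 2 + rng.normal(0, 0.1, 15))

        beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

        # Locate the maximum-yield point of the fitted surface inside the design region.
        neg_yield = lambda x: -(quad_features(np.array([x])) @ beta)[0]
        opt = minimize(neg_yield, x0=np.zeros(3), bounds=[(-1, 1)] * 3)
        print("optimal coded settings:", np.round(opt.x, 2),
              "predicted yield:", round(-opt.fun, 2))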

  16. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area-source pollutant strength is a relevant issue for the atmospheric environment and characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network, a multi-layer perceptron, whose connection weights are computed by the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, whose objective function is given by the square difference between the measured pollutant concentrations and the mathematical model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
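
    The regularized variant of the inversion can be sketched with a simple Tikhonov-style formulation (a stand-in for the maximum entropy regularization actually used), with a random transition matrix replacing the Lagrangian dispersion model and a crude scan over the regularization parameter in place of the L-curve:

        import numpy as np

        rng = np.random.default_rng(8)

        # Toy source-receptor problem: y = G s + noise, with G a transition matrix
        # (here random, standing in for the dispersion model) and s the unknown
        # area-source strengths.
        n_receptors, n_sources = 6, 20
        G = rng.uniform(0.0, 1.0, (n_receptors, n_sources))
        s_true = np.clip(np.sin(np.linspace(0, np.pi, n_sources)), 0, None)
        y = G @ s_true + rng.normal(0, 0.01, n_receptors)

        # Second-order difference operator used as the regularization operator.
        L = np.diff(np.eye(n_sources), n=2, axis=0)

        def solve(lam):
            """Tikhonov-style regularized least squares: min ||G s - y||^2 + lam ||L s||^2."""
            A = G.T @ G + lam * L.T @ L
            return np.linalg.solve(A, G.T @ y)

        # Crude scan over the regularization parameter (an L-curve would trade these off).
        for lam in [1e-4, 1e-2, 1.0]:
            s_hat = solve(lam)
            print(f"lambda={lam:g}  residual={np.linalg.norm(G @ s_hat - y):.3f}  "
                  f"roughness={np.linalg.norm(L @ s_hat):.3f}")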

  17. Optimal Shakedown of the Thin-Wall Metal Structures Under Strength and Stiffness Constraints

    NASA Astrophysics Data System (ADS)

    Alawdin, Piotr; Liepa, Liudas

    2017-06-01

    Classical optimization problems for metal structures are confined mainly to Class 1 cross-sections, but in practice it is common to use cross-sections of higher classes. In this paper, a new mathematical model is presented for the shakedown optimization problem for metal structures whose elements are designed from Class 1 to Class 4 cross-sections, under variable quasi-static loads. The features of limited plastic redistribution of forces in structures with thin-walled elements are taken into account. The authors assume elastic-plastic flexural buckling in one plane, without lateral torsional buckling of members. Design formulae for Methods 1 and 2 for members are analyzed. Structural stiffness constraints are also incorporated in order to satisfy the serviceability limit state requirements. With the help of mathematical programming theory and extreme principles, the structure optimization algorithm is developed and justified with a numerical experiment for metal plane frames.

  18. Uncertainty quantification and optimal decisions

    PubMed Central

    2017-01-01

    A mathematical model can be analysed to construct policies for action that are close to optimal for the model. If the model is accurate, such policies will be close to optimal when implemented in the real world. In this paper, the different aspects of an ideal workflow are reviewed: modelling, forecasting, evaluating forecasts, data assimilation and constructing control policies for decision-making. The example of the oil industry is used to motivate the discussion, and other examples, such as weather forecasting and precision agriculture, are used to argue that the same mathematical ideas apply in different contexts. Particular emphasis is placed on (i) uncertainty quantification in forecasting and (ii) how decisions are optimized and made robust to uncertainty in models and judgements. This necessitates full use of the relevant data and by balancing costs and benefits into the long term may suggest policies quite different from those relevant to the short term. PMID:28484343

  19. Optimal manpower allocation in aircraft line maintenance (Case in GMF AeroAsia)

    NASA Astrophysics Data System (ADS)

    Puteri, V. E.; Yuniaristanto, Hisjam, M.

    2017-11-01

    This paper presents a mathematical model to find the optimal manpower allocation in aircraft line maintenance. The research focuses on assigning the number and type of manpower allocated to each service. The study considers licensed workers holding an Aircraft Maintenance Engineer Licence (AMEL) and non-licensed workers, the Aircraft Maintenance Technicians (AMT). We also consider the relationship between stations in terms of the possibility of transferring manpower among them. The optimization model accounts for the number of workers needed for each service and the requirement for AMEL workers. This paper aims to determine the optimal manpower allocation using mathematical modeling. The objective function of the model is to minimize employee expenses. The model was solved using the ILOG CPLEX software. The results show that the manpower allocation can meet the manpower need and that all the load can be served.
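
    A small allocation model of this kind can be written with an open-source MILP modeler; the sketch below uses PuLP instead of ILOG CPLEX, and the service names, demands and costs are illustrative assumptions, not GMF AeroAsia data:

        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus

        services = ["transit", "daily", "weekly"]
        demand   = {"transit": 6, "daily": 4, "weekly": 3}     # workers needed per service
        min_amel = {"transit": 2, "daily": 1, "weekly": 1}     # licensed engineers required
        cost     = {"AMEL": 10.0, "AMT": 6.0}                  # relative wage per shift

        prob = LpProblem("manpower_allocation", LpMinimize)
        x = {(w, s): LpVariable(f"x_{w}_{s}", lowBound=0, cat="Integer")
             for w in cost for s in services}

        # Objective: minimise total employee expenses.
        prob += lpSum(cost[w] * x[w, s] for w in cost for s in services)

        for s in services:
            prob += lpSum(x[w, s] for w in cost) >= demand[s]   # staffing level met
            prob += x["AMEL", s] >= min_amel[s]                  # licensing requirement

        prob.solve()
        print(LpStatus[prob.status])
        for (w, s), var in x.items():
            print(w, s, int(var.value()))

    A real deployment would add station-transfer variables and shift structure, but the objective and constraint pattern stays the same.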

  20. Application of multi-objective optimization to pooled experiments of next generation sequencing for detection of rare mutations.

    PubMed

    Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario

    2014-01-01

    In this paper we propose some mathematical models to plan a Next Generation Sequencing experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimization of the experiment cost. Then, two different strategies to replicate patients in pools are proposed, which have the advantage to decrease the overall costs. Finally, a multi-objective optimization formulation is proposed, where the trade-off between the probability to detect a mutation and overall costs is taken into account. The proposed solutions are devised in pursuance of the following advantages: (i) the solution guarantees mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show replicating pools can decrease overall experimental cost, thus making pooling an interesting option.

  1. Beam position monitor engineering

    NASA Astrophysics Data System (ADS)

    Smith, Stephen R.

    1997-01-01

    The design of beam position monitors often involves challenging system design choices. Position transducers must be robust, accurate, and generate adequate position signal without unduly disturbing the beam. Electronics must be reliable and affordable, usually while meeting tough requirements on precision, accuracy, and dynamic range. These requirements may be difficult to achieve simultaneously, leading the designer into interesting opportunities for optimization or compromise. Some useful techniques and tools are shown. Both finite element analysis and analytic techniques will be used to investigate quasi-static aspects of electromagnetic fields such as the impedance of and the coupling of beam to striplines or buttons. Finite-element tools will be used to understand dynamic aspects of the electromagnetic fields of beams, such as wake fields and transmission-line and cavity effects in vacuum-to-air feedthroughs. Mathematical modeling of electrical signals through a processing chain will be demonstrated, in particular to illuminate areas where neither a pure time-domain nor a pure frequency-domain analysis is obviously advantageous. Emphasis will be on calculational techniques, in particular on using both time domain and frequency domain approaches to the applicable parts of interesting problems.

  2. Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.

    PubMed

    Schneider, Martin; Iskander, D Robert; Collins, Michael J

    2009-02-01

    High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
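
    The fitting step can be illustrated with a one-dimensional stand-in: a low-order rational function of the radial coordinate (in place of Zernike-based numerator and denominator expansions) fitted by Levenberg-Marquardt to synthetic profile data:

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(9)

        # Synthetic radial profile, standing in for videokeratoscopic surface data.
        r = np.linspace(0.0, 1.0, 80)
        surface = (0.5 + 0.8 * r**2 - 0.3 * r**4) / (1.0 + 0.6 * r**2)
        data = surface + rng.normal(0, 1e-3, r.size)

        def residuals(p):
            """Rational model: numerator and denominator are low-order radial polynomials
            (1-D stand-ins for Zernike expansions); the denominator is pinned to 1 at r = 0."""
            a0, a1, a2, b1 = p
            model = (a0 + a1 * r**2 + a2 * r**4) / (1.0 + b1 * r**2)
            return model - data

        fit = least_squares(residuals, x0=[0.4, 0.5, 0.0, 0.1], method='lm')  # Levenberg-Marquardt
        rms = np.sqrt(np.mean(fit.fun**2))
        print("fitted coefficients:", np.round(fit.x, 3), " rms surface error:", f"{rms:.2e}")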

  3. Numerical model updating technique for structures using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique used for updating existing numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical model to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be brought between the experimental and the numerical models.
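
    The core of the updating loop can be sketched with a one-parameter example: a minimal firefly algorithm tuning the flexural stiffness of a cantilever so that its predicted first natural frequency matches a synthetic "measured" value. The beam properties and algorithm constants are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(10)

        # Model updating toy problem: identify flexural stiffness EI so that the predicted
        # first natural frequency of a cantilever matches the measured value.
        L_beam, m_per_len = 2.0, 7.5            # length [m], mass per unit length [kg/m]
        f_measured = 12.0                        # Hz (synthetic target)

        def predicted_frequency(EI):
            # First bending mode of a uniform cantilever: f = (1.875^2 / 2 pi) sqrt(EI / (m L^4)).
            return (1.875**2 / (2 * np.pi)) * np.sqrt(EI / (m_per_len * L_beam**4))

        def cost(EI):
            return (predicted_frequency(EI) - f_measured) ** 2

        # Minimal one-dimensional firefly algorithm.
        n_fireflies, n_iter = 15, 60
        alpha, beta0, gamma = 0.05, 1.0, 1.0
        x = rng.uniform(1e3, 1e5, n_fireflies)   # candidate EI values [N m^2]
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if cost(x[j]) < cost(x[i]):  # move firefly i toward brighter firefly j
                        r2 = ((x[i] - x[j]) / 1e5) ** 2
                        x[i] += (beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                                 + alpha * 1e3 * (rng.random() - 0.5))
                        x[i] = np.clip(x[i], 1e2, 2e5)
        best = x[np.argmin([cost(v) for v in x])]
        print(f"updated EI = {best:.0f} N m^2, predicted f = {predicted_frequency(best):.2f} Hz")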

  4. The analysis of mathematics literacy on PMRI learning with media schoology of junior high school students

    NASA Astrophysics Data System (ADS)

    Wardono; Mariani, S.

    2018-03-01

    Indonesia, as a developing country, will be highly competitive in the future if its students have strong mathematics literacy. In reality, the year-to-year PISA mathematics literacy rankings of Indonesian students are still poor. This research is motivated by the importance, and the currently low level, of mathematics literacy. The purposes of this study are to: (1) analyze the effectiveness of PMRI learning with the media Schoology, and (2) describe students' mathematics literacy in PMRI learning with Schoology in terms of seven components of mathematics literacy: communication, mathematizing, representation, reasoning, devising strategies, using symbols, and using mathematical tools. The method used in this research is a sequential mixed-methods design. Data were collected through observation, interviews, tests, and documentation, and analyzed using a proportion test, a comparison test, and descriptive analysis. Based on the data analysis, it can be concluded that: (1) PMRI learning with Schoology effectively improves mathematics literacy, because classical completeness is achieved, students' mathematics literacy in PMRI learning with Schoology is higher than in expository learning, and mathematics literacy increases by 30% in PMRI learning with Schoology; (2) highly capable students attain excellent mathematics literacy and can work with broad thinking and appropriate solution strategies; students of good ability can summarize information, present problem-solving processes, and interpret solutions; and low-ability students reach a sufficient level of mathematics literacy and can solve problems in a simple way.

  5. The solution of private problems for optimization heat exchangers parameters

    NASA Astrophysics Data System (ADS)

    Melekhin, A.

    2017-11-01

    The relevance of the topic is due to the need to solve problems of resource economy in the heating systems of buildings. To solve this problem we have developed an integrated research method that allows optimization tasks for heat exchanger parameters to be solved. This method addresses a multicriteria optimization problem using nonlinear optimization software, with an array of temperatures obtained by thermography as input. The author has developed a mathematical model of the heat exchange process on the heat-exchange surfaces of the apparatus together with the solution of the multicriteria optimization problem; checked its adequacy against an experimental stand with visualization of the thermal fields; determined an optimal range of the controlled parameters influencing the heat exchange process with minimal metal consumption and maximum heat output of the finned heat exchanger; established the regularities of the heat exchange process, obtaining generalized dependencies for the temperature distribution on the heat-release surface of the heat exchanger; and defined the convergence between results calculated from theoretical dependencies and those from the mathematical model.

  6. What Would the Mathematics Curriculum Look Like if Values Were the Focus?

    ERIC Educational Resources Information Center

    Seah, Wee Tiong; Andersson, Annica; Bishop, Alan; Clarkson, Philip

    2016-01-01

    The crucial reason for the common dislike, fear, and even hatred of mathematics by students and others is probably not the nature of mathematics itself, but the way the subject is portrayed and taught. We propose that instead of a mathematics curriculum that focuses on concepts and techniques (which is often seen), it might be more productive if…

  7. To Know and to Teach: Mathematical Pedagogy from a Historical Context.

    ERIC Educational Resources Information Center

    Swetz, Frank

    1995-01-01

    Investigated historical works for pedagogical techniques. Found the use of instructional discourse, logical sequencing of mathematical problems and exercises, and employment of visual aids. Concludes that much of present day mathematical pedagogy evolved from distant historical antecedents. (30 references) (Author/MKR)

  8. Experimenting with Mathematical Biology

    ERIC Educational Resources Information Center

    Sanft, Rebecca; Walter, Anne

    2016-01-01

    St. Olaf College recently added a Mathematical Biology concentration to its curriculum. The core course, Mathematics of Biology, was redesigned to include a wet laboratory. The lab classes required students to collect data and implement the essential modeling techniques of formulation, implementation, validation, and analysis. The four labs…

  9. Mathematically Gifted Third Graders--A Challenge in the Classroom.

    ERIC Educational Resources Information Center

    Wolfle, Jane A.

    1988-01-01

    The third-grade classroom teacher can identify mathematically gifted students and can provide them with opportunities for extending their understanding and enjoyment of mathematics through use of such techniques as content sophistication, enrichment, peer tutoring, curriculum compacting, puzzles, and math centers. (Author/JDD)

  10. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. 
Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.

  11. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.

  12. Unbiased contaminant removal for 3D galaxy power spectrum measurements

    NASA Astrophysics Data System (ADS)

    Kalus, B.; Percival, W. J.; Bacon, D. J.; Samushia, L.

    2016-11-01

    We assess and develop techniques to remove contaminants when calculating the 3D galaxy power spectrum. We separate the process into three separate stages: (I) removing the contaminant signal, (II) estimating the uncontaminated cosmological power spectrum and (III) debiasing the resulting estimates. For (I), we show that removing the best-fitting contaminant (mode subtraction) and setting the contaminated components of the covariance to be infinite (mode deprojection) are mathematically equivalent. For (II), performing a quadratic maximum likelihood (QML) estimate after mode deprojection gives an optimal unbiased solution, although it requires the manipulation of large N_mode^2 matrices (Nmode being the total number of modes), which is unfeasible for recent 3D galaxy surveys. Measuring a binned average of the modes for (II) as proposed by Feldman, Kaiser & Peacock (FKP) is faster and simpler, but is sub-optimal and gives rise to a biased solution. We present a method to debias the resulting FKP measurements that does not require any large matrix calculations. We argue that the sub-optimality of the FKP estimator compared with the QML estimator, caused by contaminants, is less severe than that commonly ignored due to the survey window.
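
    The equivalence of mode subtraction and the infinite-variance limit of mode deprojection noted above can be seen in a one-template toy example:

        import numpy as np

        rng = np.random.default_rng(11)

        # Toy data vector: cosmological signal plus a known contaminant template with
        # unknown amplitude (e.g. a stellar-density systematic), plus noise.
        n = 256
        signal = rng.normal(0.0, 1.0, n)
        template = np.sin(2 * np.pi * np.arange(n) / 64.0)
        data = signal + 3.0 * template + rng.normal(0.0, 0.1, n)

        # Mode subtraction: remove the best-fitting (least-squares) amplitude of the template.
        amp = template @ data / (template @ template)
        cleaned = data - amp * template

        # The same operation written as a projection matrix, which is the limit of mode
        # deprojection (assigning infinite variance to the contaminated mode).
        P = np.eye(n) - np.outer(template, template) / (template @ template)
        cleaned_proj = P @ data

        print("best-fit contaminant amplitude:", round(amp, 3))
        print("max difference between the two cleanings:", np.max(np.abs(cleaned - cleaned_proj)))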

  13. Particle swarm optimization algorithm for optimizing assignment of blood in blood banking system.

    PubMed

    Olusanya, Micheal O; Arasomwan, Martins A; Adewumi, Aderemi O

    2015-01-01

    This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' blood transfusion requests for blood transfusion. While the drive for blood donation lingers, there is need for effective and efficient management of available blood in blood banking systems. Moreover, inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to nonusage necessitate the development of mathematical models and techniques for effective handling of blood distribution among available blood types in order to minimize wastages and importation from external sources. This gives rise to the blood assignment problem (BAP) introduced recently in literature. We propose a queue and multiple knapsack models with PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic real-world population distribution of blood types. Results obtained show the efficiency of the proposed algorithm for BAP with no blood units wasted and very low importation, where necessary, from outside the blood bank. The result therefore can serve as a benchmark and basis for decision support tools for real-life deployment.
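
    The global-best PSO update used for this kind of assignment can be sketched on a toy ABO-compatibility problem; the supplies, demands, penalty weights and PSO constants below are illustrative assumptions, and a simple penalty formulation replaces the paper's queue and knapsack models:

        import numpy as np

        rng = np.random.default_rng(12)

        # Toy blood assignment: units of donor type assigned to patient type, ABO-compatible only.
        supply = np.array([40.0, 30.0, 20.0, 10.0])      # O, A, B, AB in stock
        demand = np.array([35.0, 30.0, 18.0, 12.0])      # O, A, B, AB requested
        compatible = np.array([[1, 1, 1, 1],             # O can serve everyone
                               [0, 1, 0, 1],             # A -> A, AB
                               [0, 0, 1, 1],             # B -> B, AB
                               [0, 0, 0, 1]])            # AB -> AB

        def cost(x):
            """Penalise unmet demand (importation), leftover stock (wastage),
            and violations of the supply limits."""
            x = np.clip(x.reshape(4, 4), 0, None) * compatible
            unmet = np.clip(demand - x.sum(axis=0), 0, None).sum()
            leftover = np.clip(supply - x.sum(axis=1), 0, None).sum()
            oversupply = np.clip(x.sum(axis=1) - supply, 0, None).sum()
            return unmet + 0.2 * leftover + 50.0 * oversupply

        # Standard global-best PSO over the 16 assignment variables.
        n_particles, dim, n_iter = 30, 16, 300
        pos = rng.uniform(0, 20, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
        gbest = pbest[np.argmin(pbest_cost)]
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            costs = np.array([cost(p) for p in pos])
            improved = costs < pbest_cost
            pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
            gbest = pbest[np.argmin(pbest_cost)]

        print("best cost:", round(cost(gbest), 2))
        print("assignment (donor rows x patient columns):")
        print(np.round(np.clip(gbest.reshape(4, 4), 0, None) * compatible, 1))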

  14. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm, by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least-squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subsets selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
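
    The authors' specific estimation technique is not reproduced here. As a hedged, generic sketch of the same idea (transform the predictors, then search subsets), the snippet below expands a few predictors with power transformations and runs an exhaustive best-subset search scored by cross-validated mean squared error; the data and the candidate transformations are assumptions.

      # Hedged sketch: power-transform predictors, then exhaustive best-subset selection by CV error.
      import itertools
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.uniform(0.1, 3.0, (200, 3))
      y = 2.0 * X[:, 0] ** 2 - 1.5 * np.sqrt(X[:, 1]) + rng.normal(scale=0.3, size=200)

      # Candidate transformed features (original, square, square root) for each predictor.
      features, names = [], []
      for j in range(X.shape[1]):
          for label, g in (("x", lambda v: v), ("x^2", np.square), ("sqrt(x)", np.sqrt)):
              features.append(g(X[:, j]))
              names.append(f"{label}[{j}]")
      F = np.column_stack(features)

      best = (np.inf, None)
      for k in range(1, 4):                                  # subsets of up to 3 features
          for idx in itertools.combinations(range(F.shape[1]), k):
              mse = -cross_val_score(LinearRegression(), F[:, idx], y,
                                     scoring="neg_mean_squared_error", cv=5).mean()
              if mse < best[0]:
                  best = (mse, idx)
      print("best subset:", [names[i] for i in best[1]], "CV MSE:", round(best[0], 3))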

  15. An Extended EPQ-Based Problem with a Discontinuous Delivery Policy, Scrap Rate, and Random Breakdown

    PubMed Central

    Song, Ming-Syuan; Chen, Hsin-Mei; Chiu, Yuan-Shyi P.

    2015-01-01

    In real supply chain environments, a discontinuous multi-delivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work on an economic production quantity (EPQ) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown, by incorporating a multiple-delivery policy in place of the continuous policy, and investigates the effect on the optimal run-time decision for this specific EPQ model. Next, we further expand the scope of the problem to incorporate the retailer's stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer's own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs, comprising costs incurred in production units, transportation, and retail stores, are derived for both models. Numerical examples are provided to demonstrate the applicability of our research results. PMID:25821853
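
    The paper's expected-cost expressions are not reproduced here. As a heavily hedged sketch of the final optimization step only, the snippet below minimises a made-up convex per-unit-time cost in the production run time T (setup, holding, and a fixed number of discrete deliveries per cycle); the cost terms and parameter values are illustrative assumptions, not the paper's model.

      # Hedged sketch: numerically minimise an assumed expected-cost function of the run time T.
      from scipy.optimize import minimize_scalar

      K, h, c_ship, lam = 450.0, 0.8, 60.0, 4000.0   # setup cost, holding cost, per-shipment cost, demand

      def expected_total_cost(T):
          # Hypothetical per-unit-time cost: setup amortised over the cycle, holding that grows
          # with the run time, and a fixed number of discrete deliveries per cycle.
          n_deliveries = 4
          return K / T + 0.5 * h * lam * T + n_deliveries * c_ship / T

      res = minimize_scalar(expected_total_cost, bounds=(0.01, 2.0), method="bounded")
      print(f"optimal run time T* = {res.x:.3f}, expected cost = {res.fun:.1f}")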

  16. An extended EPQ-based problem with a discontinuous delivery policy, scrap rate, and random breakdown.

    PubMed

    Chiu, Singa Wang; Lin, Hong-Dar; Song, Ming-Syuan; Chen, Hsin-Mei; Chiu, Yuan-Shyi P

    2015-01-01

    In real supply chain environments, a discontinuous multi-delivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work on an economic production quantity (EPQ) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown, by incorporating a multiple-delivery policy in place of the continuous policy, and investigates the effect on the optimal run-time decision for this specific EPQ model. Next, we further expand the scope of the problem to incorporate the retailer's stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer's own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs, comprising costs incurred in production units, transportation, and retail stores, are derived for both models. Numerical examples are provided to demonstrate the applicability of our research results.

  17. Fast approximate delivery of fluence maps for IMRT and VMAT

    NASA Astrophysics Data System (ADS)

    Balvert, Marleen; Craft, David

    2017-02-01

    In this article we provide a method to generate the trade-off between delivery time and fluence map matching quality for dynamically delivered fluence maps. At the heart of our method lies a mathematical programming model that, for a given duration of delivery, optimizes leaf trajectories and dose rates such that the desired fluence map is reproduced as well as possible. We begin with the single fluence map case and then generalize the model and the solution technique to the delivery of sequential fluence maps. The resulting large-scale, non-convex optimization problem was solved using a heuristic approach. We test our method using a prostate case and a head and neck case, and present the resulting trade-off curves. Analysis of the leaf trajectories reveals that short time plans have larger leaf openings in general than longer delivery time plans. Our method allows one to explore the continuum of possibilities between coarse, large segment plans characteristic of direct aperture approaches and narrow field plans produced by sliding window approaches. Exposing this trade-off will allow for an informed choice between plan quality and solution time. Further research is required to speed up the optimization process to make this method clinically implementable.

  18. Mathematical études: embedding opportunities for developing procedural fluency within rich mathematical contexts

    NASA Astrophysics Data System (ADS)

    Foster, Colin

    2013-07-01

    In a high-stakes assessment culture, it is clearly important that learners of mathematics develop the necessary fluency and confidence to perform well on the specific, narrowly defined techniques that will be tested. However, an overemphasis on the training of piecemeal mathematical skills at the expense of more independent engagement with richer, multifaceted tasks risks devaluing the subject and failing to give learners an authentic and enjoyable experience of being a mathematician. Thus, there is a pressing need for mathematical tasks which embed the practice of essential techniques within a richer, exploratory and investigative context. Such tasks can be justified to school management or to more traditional mathematics teachers as vital practice of important skills; at the same time, they give scope to progressive teachers who wish to work in more exploratory ways. This paper draws on the notion of a musical étude to develop a powerful and versatile approach in which these apparently contradictory aspects of teaching mathematics can be harmoniously combined. I illustrate the tactic in three central areas of the high-school mathematics curriculum: plotting Cartesian coordinates, solving linear equations and performing enlargements. In each case, extensive practice of important procedures takes place alongside more thoughtful and mathematically creative activity.

  19. Techniques Use by Science, Technology and Mathematics (STM) Teachers for Controlling Undesirable Classroom Behaviours in Anambra State Secondary Schools

    ERIC Educational Resources Information Center

    Chinelo, Okigbo Ebele; Nwanneka, Okoli Josephine

    2016-01-01

    This study investigated the techniques used by secondary school Science, Technology and Mathematics (STM) teachers in controlling undesirable behaviours in their classrooms. It adopted a descriptive survey design in which 178 Anambra State teachers teaching STM subjects in senior secondary schools were involved in the research. Two sections of questionnaire…

  20. Examining the Changes in Novice and Experienced Mathematics Teachers' Questioning Techniques through the Lesson Study Process

    ERIC Educational Resources Information Center

    Ong, Ewe Gnoh; Lim, Chap Sam; Ghazali, Munirah

    2010-01-01

    The purpose of this study was to examine the changes in novice and experienced mathematics teachers' questioning techniques. This study was conducted in Sarawak where ten (experienced and novice) teachers from two schools underwent the lesson study process for fifteen months. Four data collection methods namely, observation, interview, lesson…

  1. Finite element meshing approached as a global minimization process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WITKOWSKI,WALTER R.; JUNG,JOSEPH; DOHRMANN,CLARK R.

    2000-03-01

    The ability to generate a suitable finite element mesh in an automatic fashion is becoming the key to being able to automate the entire engineering analysis process. However, placing an all-hexahedron mesh in a general three-dimensional body continues to be an elusive goal. The approach investigated in this research is fundamentally different from any other known to the authors. A physical-analogy viewpoint is used to formulate the actual meshing problem, which constructs a global mathematical description of the problem. The analogy used was that of minimizing the electrical potential of a system of charged particles within a charged domain. The particles in the presented analogy represent duals to mesh elements (i.e., quads or hexes). Particle movement is governed by a mathematical functional which accounts for inter-particle repulsive, attractive, and alignment forces. This functional is minimized to find the optimal location and orientation of each particle. After the particles are connected, a mesh can be easily resolved. The mathematical description for this problem is as easy to formulate in three dimensions as it is in two or one. The meshing algorithm was developed within CoMeT. It can solve the two-dimensional meshing problem for convex and concave geometries in a purely automated fashion. Investigation of the robustness of the technique has shown a success rate of approximately 99% for the two-dimensional geometries tested. Run times to mesh a 100-element complex geometry were typically in the 10-minute range. Efficiency of the technique is still an issue that needs to be addressed. Performance is critical for most engineers generating meshes; it was not for this project. The primary focus of this work was to investigate and evaluate a meshing algorithm/philosophy, with efficiency issues being secondary. The algorithm was also extended to mesh three-dimensional geometries. Unfortunately, only simple geometries were tested before this project ended. The primary complexity in the extension was in the connectivity problem formulation. Defining all of the interparticle interactions that occur in three dimensions and expressing them in mathematical relationships is very difficult.
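
    As a hedged toy version of the physical analogy described above (and not the CoMeT functional, which also includes attraction and alignment terms), the sketch below places a handful of particles in the unit square by globally minimising a Coulomb-like repulsive energy plus a wall-repulsion term; all constants are illustrative assumptions.

      # Toy sketch of the physical analogy: place particles (element duals) by minimising a
      # repulsive pairwise potential inside the unit square.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.spatial.distance import pdist

      n = 16
      rng = np.random.default_rng(3)
      x0 = rng.random(2 * n)                      # random initial particle positions in [0,1]^2

      def energy(flat):
          pts = flat.reshape(n, 2)
          pair = np.sum(1.0 / (pdist(pts) + 1e-9))                         # Coulomb-like repulsion
          wall = np.sum(1.0 / (pts + 1e-3) + 1.0 / (1.0 - pts + 1e-3))     # repulsion from the walls
          return pair + 0.1 * wall

      res = minimize(energy, x0, method="L-BFGS-B", bounds=[(0.0, 1.0)] * (2 * n))
      layout = res.x.reshape(n, 2)
      print(res.fun, layout.round(2))              # particles spread toward a roughly uniform layout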

  2. Milestones of mathematical model for business process management related to cost estimate documentation in petroleum industry

    NASA Astrophysics Data System (ADS)

    Khamidullin, R. I.

    2018-05-01

    The paper is devoted to milestones of an optimal mathematical model for a business process related to the cost estimate documentation compiled during construction and reconstruction of oil and gas facilities. It describes the study and analysis of fundamental issues in the petroleum industry that are caused by economic instability and deterioration of business strategy. Business process management is presented as business process modeling aimed at improving the studied business process, namely the main criteria of optimization and recommendations for the improvement of the above-mentioned business model.

  3. An intelligent emissions controller for fuel lean gas reburn in coal-fired power plants.

    PubMed

    Reifman, J; Feldman, E E; Wei, T Y; Glickert, R W

    2000-02-01

    The application of artificial intelligence techniques for performance optimization of the fuel lean gas reburn (FLGR) system is investigated. A multilayer, feedforward artificial neural network is applied to model static nonlinear relationships between the distribution of injected natural gas into the upper region of the furnace of a coal-fired boiler and the corresponding oxides of nitrogen (NOx) emissions exiting the furnace. Based on this model, optimal distributions of injected gas are determined such that the largest NOx reduction is achieved for each value of total injected gas. This optimization is accomplished through the development of a new optimization method based on neural networks. This new optimal control algorithm, which can be used as an alternative generic tool for solving multidimensional nonlinear constrained optimization problems, is described and its results are successfully validated against an off-the-shelf tool for solving mathematical programming problems. Encouraging results obtained using plant data from one of Commonwealth Edison's coal-fired electric power plants demonstrate the feasibility of the overall approach. Preliminary results show that the use of this intelligent controller will also enable the determination of the most cost-effective operating conditions of the FLGR system by considering, along with the optimal distribution of the injected gas, the cost differential between natural gas and coal and the open-market price of NOx emission credits. Further study, however, is necessary, including the construction of a more comprehensive database, needed to develop high-fidelity process models and to add carbon monoxide (CO) emissions to the model of the gas reburn system.
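
    As a hedged sketch of the surrogate-plus-optimization idea described above (not the plant model or the authors' neural-network-based optimizer), the snippet below fits a small feedforward network to synthetic injection-versus-NOx data and then searches for the gas distribution that minimises the predicted NOx at a fixed total injected gas. The data, the assumed response surface, and the architecture are illustrative assumptions.

      # Hedged sketch: NN surrogate of NOx vs. injected-gas distribution, then constrained search.
      import numpy as np
      from scipy.optimize import minimize
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 1.0, (500, 3))                     # gas flows to 3 injection zones
      nox = (100.0 - 40.0 * X[:, 0] - 25.0 * X[:, 1] ** 2 + 30.0 * X[:, 0] * X[:, 2]
             + rng.normal(scale=1.0, size=500))               # assumed plant response + noise

      model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0).fit(X, nox)

      total_gas = 1.2                                          # fixed total injected gas
      cons = {"type": "eq", "fun": lambda x: np.sum(x) - total_gas}
      res = minimize(lambda x: model.predict(x.reshape(1, -1))[0],
                     x0=np.full(3, total_gas / 3),
                     bounds=[(0.0, 1.0)] * 3, constraints=[cons], method="SLSQP")
      print("best distribution:", res.x.round(2), "predicted NOx:", round(res.fun, 1))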

  4. D-Optimal Experimental Design for Contaminant Source Identification

    NASA Astrophysics Data System (ADS)

    Sai Baba, A. K.; Alexanderian, A.

    2016-12-01

    Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters - in our case, the sparsity of the sensors - to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental designs involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluation of the objective function and gradient, involving the determinant of large and dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
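
    The abstract's specific randomized estimators are not given here. A common building block of such randomized matrix techniques is stochastic (Hutchinson) trace estimation, which avoids forming large dense matrices explicitly; the sketch below demonstrates it on a small synthetic symmetric positive-definite matrix, purely as an assumed illustration.

      # Hedged sketch: Hutchinson randomised trace estimation with Rademacher probe vectors.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 300
      B = rng.normal(size=(n, n))
      A = B @ B.T / n + np.eye(n)        # a symmetric positive-definite test matrix

      def hutchinson_trace(matvec, n, n_samples=200, rng=rng):
          # trace(A) is approximated by the average of z^T A z over random +/-1 probe vectors z.
          est = 0.0
          for _ in range(n_samples):
              z = rng.choice([-1.0, 1.0], size=n)
              est += z @ matvec(z)
          return est / n_samples

      print("exact trace:     ", np.trace(A))
      print("randomised trace:", hutchinson_trace(lambda v: A @ v, n))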

  5. The Role of the Mathematics Supervisor in K-12 Education

    ERIC Educational Resources Information Center

    Greenes, Carole

    2013-01-01

    The implementation of "the Common Core Standards for Mathematics" and the assessments of those concepts, skills, reasoning methods, and mathematical practices that are in development necessitate the updating of teachers' knowledge of content, pedagogical techniques to enhance engagement and persistence, and strategies for responding to…

  6. Learn from the Masters.

    ERIC Educational Resources Information Center

    Swetz, Frank, Ed.; And Others

    This book contains papers that identify and clarify techniques and pedagogical approaches for using the history of mathematics in teaching. The chapters are separated into two sections, one containing 8 chapters about secondary school mathematics and the other containing 15 chapters on higher mathematics. The first section discusses topics such as…

  7. Optimization of computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikhalevich, V.S.; Sergienko, I.V.; Zadiraka, V.K.

    1994-11-01

    This article examines some topics of optimization of computations, which have been discussed at 25 seminar-schools and symposia organized by the V.M. Glushkov Institute of Cybernetics of the Ukrainian Academy of Sciences since 1969. We describe the main directions in the development of computational mathematics and present some of our own results that reflect a certain design conception of speed-optimal and accuracy-optimal (or nearly optimal) algorithms for various classes of problems, as well as a certain approach to optimization of computer computations.

  8. EFQPSK Versus CERN: A Comparative Study

    NASA Technical Reports Server (NTRS)

    Borah, Deva K.; Horan, Stephen

    2001-01-01

    This report presents a comparative study of Enhanced Feher's Quadrature Phase Shift Keying (EFQPSK) and Constrained Envelope Root Nyquist (CERN) techniques. These two techniques have been developed in recent times to provide high spectral and power efficiencies in a nonlinear amplifier environment. The purpose of this study is to gain insights into these techniques and to help system planners and designers with an appropriate set of guidelines for using them. The comparative study presented in this report relies on effective simulation models and procedures. Therefore, a significant part of this report is devoted to understanding the mathematical and simulation models of the techniques and their set-up procedures. In particular, mathematical models of EFQPSK and CERN, effects of the sampling rate in discrete-time signal representation, and modeling of nonlinear amplifiers and predistorters have been considered in detail. The results of this study show that both EFQPSK and CERN signals provide spectrally efficient communications compared to filtered conventional linear modulation techniques when a nonlinear power amplifier is used. However, there are important differences. The spectral efficiency of CERN signals, with a small amount of input backoff, is significantly better than that of EFQPSK signals if the nonlinear amplifier is an ideal clipper. However, to achieve such spectral efficiencies with a practical nonlinear amplifier, CERN processing requires a predistorter which effectively translates the amplifier's characteristics close to those of an ideal clipper. Thus, the spectral performance of CERN signals strongly depends on the predistorter. EFQPSK signals, on the other hand, do not need such predistorters, since their spectra are almost unaffected by the nonlinear amplifier. This report discusses several receiver structures for EFQPSK signals. It is observed that optimal receiver structures can be realized for both coded and uncoded EFQPSK signals without much increase in computational complexity. When a nonlinear amplifier is used, the bit error rate (BER) performance of CERN signals with a matched filter receiver is found to be more than one decibel (dB) worse than the bit error performance of EFQPSK signals. Although channel coding is found to provide BER performance improvement for both EFQPSK and CERN signals, the performance of EFQPSK signals remains better than that of CERN. Optimal receiver structures for CERN signals with nonlinear equalization are left as possible future work. Based on the numerical results, it is concluded that, in nonlinear channels, CERN processing leads toward better bandwidth efficiency with a compromise in power efficiency. Hence, for bandwidth-efficient communications needs, CERN is a good solution provided effective adaptive predistorters can be realized. On the other hand, EFQPSK signals provide a good power-efficient solution with a compromise in bandwidth efficiency.

  9. Melting Heat in Radiative Flow of Carbon Nanotubes with Homogeneous-Heterogeneous Reactions

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Muhammad, Khursheed; Muhammad, Taseer; Alsaedi, Ahmed

    2018-04-01

    The present article provides mathematical modeling of melting heat and thermal radiation in stagnation-point flow of carbon nanotubes towards a nonlinear stretchable surface of variable thickness. The process of homogeneous-heterogeneous reactions is considered. Diffusion coefficients are taken to be equal for both reactant and autocatalyst. Water and gasoline oil are taken as base fluids. The conversion of the partial differential system to an ordinary differential system is done by suitable transformations. The optimal homotopy technique is employed to develop solutions for the velocity, temperature, concentration, skin friction, and local Nusselt number. Graphical results for various values of pertinent parameters are displayed and discussed. Our results indicate that the skin friction coefficient and local Nusselt number are enhanced for larger values of nanoparticle volume fraction.

  10. Guaranteed estimation of solutions to Helmholtz transmission problems with uncertain data from their indirect noisy observations

    NASA Astrophysics Data System (ADS)

    Podlipenko, Yu. K.; Shestopalov, Yu. V.

    2017-09-01

    We investigate the guaranteed estimation problem for linear functionals of solutions to transmission problems for the Helmholtz equation with inexact data. The right-hand sides of the equations entering the statements of the transmission problems and the statistical characteristics of the observation errors are supposed to be unknown and belonging to certain sets. It is shown that the optimal linear mean-square estimates of the above-mentioned functionals and the estimation errors are expressed via solutions to systems of transmission problems of a special type. The results and techniques can be applied in the analysis and estimation of solutions to forward and inverse electromagnetic and acoustic problems with uncertain data that arise in mathematical models of wave diffraction on transparent bodies.

  11. Application of a Functional Mathematical Index to the Evaluation of the Nutritional Quality of Potatoes

    USDA-ARS?s Scientific Manuscript database

    This paper describes the derivation and application of a new functional mathematical index that was used to evaluate the nutritional, safety, and processing quality aspects of potatoes. The index introduces the concept of an “optimal potato”, using appropriate distance and N-dimensional parameter sp...

  12. Current advances in mathematical modeling of anti-cancer drug penetration into tumor tissues.

    PubMed

    Kim, Munju; Gillies, Robert J; Rejniak, Katarzyna A

    2013-11-18

    Delivery of anti-cancer drugs to tumor tissues, including their interstitial transport and cellular uptake, is a complex process involving various biochemical, mechanical, and biophysical factors. Mathematical modeling provides a means through which to understand this complexity better, as well as to examine interactions between contributing components in a systematic way via computational simulations and quantitative analyses. In this review, we present the current state of mathematical modeling approaches that address phenomena related to drug delivery. We describe how various types of models were used to predict spatio-temporal distributions of drugs within the tumor tissue, to simulate different ways to overcome barriers to drug transport, or to optimize treatment schedules. Finally, we discuss how integration of mathematical modeling with experimental or clinical data can provide better tools to understand the drug delivery process, in particular to examine the specific tissue- or compound-related factors that limit drug penetration through tumors. Such tools will be important in designing new chemotherapy targets and optimal treatment strategies, as well as in developing non-invasive diagnosis to monitor treatment response and detect tumor recurrence.
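
    As a hedged, minimal member of the model families reviewed above, the sketch below integrates a 1D diffusion-uptake equation for drug concentration in tissue adjacent to a vessel; the geometry, diffusion coefficient, uptake rate, and boundary conditions are illustrative assumptions, not values from any particular study.

      # Minimal 1D sketch: drug diffusing from a vessel wall into tissue while being taken up,
      #   dc/dt = D d2c/dx2 - k c,  c(0,t) = 1 (vessel), zero flux at the far boundary.
      import numpy as np

      D, k = 1e-6, 1e-3            # diffusion coefficient (cm^2/s), uptake rate (1/s); assumed values
      L, nx = 0.05, 101            # 500 micron tissue depth, grid points
      dx = L / (nx - 1)
      dt = 0.4 * dx * dx / D       # stable explicit time step
      c = np.zeros(nx)

      for _ in range(20000):
          c[0] = 1.0                                   # constant source at the vessel
          lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
          lap[-1] = 2 * (c[-2] - c[-1]) / dx**2        # zero-flux far boundary
          c = c + dt * (D * lap - k * c)
          c[0] = 1.0

      depth_um = np.arange(nx) * dx * 1e4
      for i in (0, 20, 40, 60, 80, 100):
          print(f"{depth_um[i]:6.0f} um : c = {c[i]:.3f}")   # penetration profile falls off with depth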

  13. Viscosity Meaurement Technique for Metal Fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ban, Heng; Kennedy, Rory

    2015-02-09

    Metallic fuels have exceptional transient behavior, excellent thermal conductivity, and a more straightforward reprocessing path, which does not separate out pure plutonium from the process stream. Fabrication of fuel containing minor actinides and rare earth (RE) elements for irradiation tests, for instance U-20Pu-3Am-2Np-1.0RE-15Zr samples at the Idaho National Laboratory, is generally done by melt casting in an inert atmosphere. For the design of a casting system and further scale-up development, computational modeling of the casting process is needed to provide information on melt flow and solidification for process optimization. Therefore, there is a need for melt viscosity data, the most important melt property controlling the melt flow. The goal of the project was to develop a measurement technique that uses a fully sealed melt sample, with no americium vapor loss, to determine the viscosity of metallic melts at temperatures relevant to the casting process. The specific objectives of the project were to: develop mathematical models to establish the principle of the measurement method, design and build a viscosity measurement prototype system based on the established principle, and calibrate the system and quantify the uncertainty range. The result of the project indicates that the oscillation cup technique is applicable for melt viscosity measurement. Detailed mathematical models of innovative sample ampoule designs were developed to determine not only melt viscosity but also, under certain designs, melt density. Measurement uncertainties were analyzed and quantified. The result of this project can be used as the initial step toward the eventual goal of establishing a viscosity measurement system for radioactive melts.
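
    In the oscillation-cup method, viscosity is inferred from the logarithmic decrement and period of a torsionally oscillating sealed crucible. As a hedged sketch of only the data-reduction step, the snippet below fits a damped oscillation to synthetic angular-displacement data; the data are made up, and the final decrement-to-viscosity conversion (the instrument's working equations) is deliberately not reproduced.

      # Hedged sketch: extract the logarithmic decrement and period from an oscillation-cup trace.
      import numpy as np
      from scipy.optimize import curve_fit

      def damped(t, theta0, delta, period, phi):
          omega = 2.0 * np.pi / period
          # delta is the logarithmic decrement: amplitude shrinks by exp(-delta) every period.
          return theta0 * np.exp(-delta * t / period) * np.cos(omega * t + phi)

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 40.0, 800)
      signal = damped(t, 1.0, 0.08, 4.0, 0.3) + rng.normal(scale=0.01, size=t.size)  # synthetic trace

      popt, _ = curve_fit(damped, t, signal, p0=(0.8, 0.05, 4.0, 0.0))  # period guess read off trace
      theta0, delta, period, phi = popt
      print(f"logarithmic decrement = {delta:.3f}, period = {period:.3f} s")
      # Viscosity then follows from the cup geometry and (delta, period) via the instrument's
      # working equations, which are not shown in this sketch.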

  14. 47 CFR 1.2202 - Competitive bidding design options.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Section 1.2202 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants...) Procedures that utilize mathematical computer optimization software, such as integer programming, to evaluate... evaluating bids using a ranking based on specified factors. (B) Procedures that combine computer optimization...

  15. Applied optimal shape design

    NASA Astrophysics Data System (ADS)

    Mohammadi, B.; Pironneau, O.

    2002-12-01

    This paper is a short survey of optimal shape design (OSD) for fluids. OSD is an interesting field both mathematically and for industrial applications. Existence, sensitivity, correct discretization are important theoretical issues. Practical implementation issues for airplane designs are critical too. The paper is also a summary of the material covered in our recent book, Applied Optimal Shape Design, Oxford University Press, 2001.

  16. Do Dogs Know Related Rates Rather than Optimization?

    ERIC Educational Resources Information Center

    Perruchet, Pierre; Gallego, Jorge

    2006-01-01

    Although dogs seemingly follow the optimal path to reach a ball thrown into the water, they certainly do not know the minimization function proposed in calculus books. Trading the optimization problem for a related rates problem leads to a mathematically identical solution, which, it is argued here, is a more plausible model for the…
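
    For context, the standard textbook setup behind both framings is sketched below (a worked equation under assumed notation, not a quotation from the article): the dog runs a distance d - y along the beach at speed r, then swims to a ball a perpendicular distance z offshore at speed s, and the entry point y minimising the travel time satisfies the same condition whether derived by optimization or by related rates.

      % Worked sketch; r = running speed, s = swimming speed, z = offshore distance,
      % d = along-beach distance, y = distance from the perpendicular foot to the entry point.
      \begin{align*}
      T(y) &= \frac{d - y}{r} + \frac{\sqrt{y^{2} + z^{2}}}{s}, \\
      T'(y) &= -\frac{1}{r} + \frac{y}{s\sqrt{y^{2} + z^{2}}} = 0
      \quad\Longrightarrow\quad
      y^{*} = \frac{z}{\sqrt{(r/s)^{2} - 1}}.
      \end{align*}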

  17. The Role of Graphing Calculators in Mathematics Reform.

    ERIC Educational Resources Information Center

    Waits, Bert K.; Demana, Franklin

    This essay describes the role of graphing calculators in mathematics reform. Among the topics discussed are the history of graphing calculators in mathematics education, recent technological innovations, and professional development opportunities. The case is made for a balanced approach between calculator use and paper-and-pencil techniques.…

  18. Application of Mathematical Signal Processing Techniques to Mission Systems. (l’Application des techniques mathematiques du traitement du signal aux systemes de conduite des missions)

    DTIC Science & Technology

    1999-11-01

    ...represents the linear time-invariant (LTI) response of the combined analysis/synthesis system, while the second represents the aliasing introduced into... effectively to implement voice scrambling systems based on time-frequency permutation. The most general form of such a system is shown in Fig. 22, where... (RTO Lecture Series 216, Neuilly-sur-Seine Cedex, France)

  19. Development of a nonlinear switching function and its application to static lift characteristics of straight wings

    NASA Technical Reports Server (NTRS)

    Hewes, D. E.

    1978-01-01

    A mathematical modeling technique was developed for the lift characteristics of straight wings throughout a very wide angle-of-attack range. The technique employs a mathematical switching function that facilitates the representation of the nonlinear aerodynamic characteristics in the partially and fully stalled regions and permits matching empirical data within ±4 percent of maximum values. Although specifically developed for use in modeling the lift characteristics, the technique appears to have other applications in both aerodynamic and nonaerodynamic fields.
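
    One commonly used form of such a blending (switching) function, offered here only as a hedged illustration and not as the function developed in the report, combines a linear attached-flow lift model with a flat-plate post-stall model using a logistic weight in angle of attack; all constants below are illustrative assumptions.

      # Hedged sketch: logistic switching function blending linear and fully stalled lift models.
      import numpy as np

      def lift_coefficient(alpha_deg, a0=0.1, alpha_stall=15.0, sharpness=0.8):
          alpha = np.radians(alpha_deg)
          sigma = 1.0 / (1.0 + np.exp(-sharpness * (alpha_deg - alpha_stall)))   # switching weight
          cl_linear = a0 * alpha_deg                        # attached-flow lift (per-degree slope)
          cl_stalled = 2.0 * np.sin(alpha) * np.cos(alpha)  # flat-plate lift beyond stall
          return (1.0 - sigma) * cl_linear + sigma * cl_stalled

      for a in (0, 5, 10, 15, 20, 30, 45, 60, 90):
          print(f"alpha = {a:3d} deg  CL = {lift_coefficient(a):.2f}")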

  20. Methods for Improving Information from ’Undesigned’ Human Factors Experiments.

    DTIC Science & Technology

    Human factors engineering, Information processing, Regression analysis, Experimental design, Least squares method, Analysis of variance, Correlation techniques, Matrices (Mathematics), Multiple disciplines, Mathematical prediction
